We recently migrated to Airflow 2.3.3.
We get warnings, followed by an exception, saying that next_execution_date is deprecated and that data_interval_end should be used instead.
But when we made the change, we got failures caused by a time difference between these two macros.
Also, I checked the code; both use the UTC timezone.
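Here is a minimal snippet (Airflow 2.3.x; the DAG id and schedule are placeholders, not our real DAGs) that logs both values side by side so the difference can be inspected for a given run:

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def compare(next_execution_date=None, data_interval_end=None, **_):
    # Both arrive as timezone-aware datetimes in UTC.
    print(f"next_execution_date: {next_execution_date}")  # emits the deprecation warning
    print(f"data_interval_end:   {data_interval_end}")

with DAG(
    dag_id="compare_interval_macros",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="compare", python_callable=compare)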
When attempting to add a maintenance exclusion to my GKE cluster to prevent minor upgrades to the control plane and data plane between 1/25/23 and 4/30/23, I receive the following error:
gcloud container clusters update <my-cluster-name> \
--add-maintenance-exclusion-name suspend_upgrades_past_eol \
--add-maintenance-exclusion-start 2023-01-25T00:00:00-05:00 \
--add-maintenance-exclusion-end 2023-04-30T23:59:59-05:00 \
--add-maintenance-exclusion-scope no_minor_or_node_upgrades
ERROR: (gcloud.container.clusters.update) ResponseError: code=400, message=MaintenancePolicy.maintenanceExclusions["suspend_upgrades_past_eol"].endTime needs to be before minor version 1.21 end of life: (2023-1). See release schedule at https://cloud.google.com/kubernetes-engine/docs/release-schedule.
According to an email I received from GCP, GKE clusters running 1.21 should be able to create maintenance exclusions extending up to April 30th, 2023. I believe my command should've been valid, especially considering I took it directly from that GCP email. I've also tried reducing the time range to end on 4/28/23, to no avail.
I'm running the latest version of gcloud:
Google Cloud SDK 415.0.0
alpha 2023.01.20
beta 2023.01.20
bq 2.0.84
core 2023.01.20
gsutil 5.18
Any clue as to what I'm doing wrong, or ideas on how to get around this, would be appreciated.
I believe you can do this in two parts:
Part 1: Set a maintenance exclusion window for no upgrades until Feb 28th.
You can set a 30-day maintenance exclusion window, which will let you push upgrades off until Feb 28th.
Note there are 3 types of maintenance exclusion scopes. Two will still complain that the exclusion can't go past the EoL (end of life) date, but the third will work. The two that fail are the ones titled "No minor or node upgrades" and "No minor upgrades" (the ones that can go up to 90 / up to 180 days in cases where EoL isn't a factor); the one that will work is the up-to-30-days no_upgrades option.
^-- You may need to temporarily change your release channel to No Channel / Static version in order to set that option. (It's a reversible change; see the sketch below.)
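A sketch of what Part 1 might look like (the exclusion name and exact dates are placeholders I haven't verified against your setup):

gcloud container clusters update <my-cluster-name> \
  --release-channel None

gcloud container clusters update <my-cluster-name> \
  --add-maintenance-exclusion-name short_no_upgrades \
  --add-maintenance-exclusion-start 2023-01-29T00:00:00-05:00 \
  --add-maintenance-exclusion-end 2023-02-27T23:59:59-05:00 \
  --add-maintenance-exclusion-scope no_upgrades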
Part 2: The longer-than-30-day delay option that's not working today: try it again 1-27 days from now and it "might" work. You'll be able to wait those 1-27 days thanks to Part 1.
I've heard an unconfirmed rumor of a not-yet-released change that could land as soon as Feb 1st, 2023 (if not then, potentially sometime before the end of Feb 28th, with early Feb being likely). It would allow one of those 90-day no-upgrade exclusion windows to be put into place to extend the deadline for the forced update to 1.22 as far as April 30th, 2023, but that's the absolute deadline. Either way, don't delay; try to update ASAP. (I'd also recommend you don't depend on an extension to April 30th and instead try to update by Feb 28th, as I could be incorrect; I don't work at Google.)
^-- Oh right, you'll have to temporarily switch your release channel back from No Channel / Static version to the Stable release channel in order to get the other 2 options.
(Side note: it's my understanding that the whole reason for the potential future update I'm referring to is that this is being done as an exception. Normally forcing an auto-upgrade would be a non-issue, but 1.21 -> 1.22 had some API deprecations that can cause breakage if you're not ready, which explains why they're making an exception to extend slightly past the end-of-life deadline of Jan 31st, 2023.)
Update - The change came through:
Here's a working example. Note the linter/policy enforcement is extremely finicky and fails without proper error messages, but you can get it to work if you thread the needle just right (the full command follows this list):
No minor or node upgrades from 1/29/23 - 4/29/23 will fail (start date too early)
No minor or node upgrades from 1/31/23 - 4/30/23 will fail (end date too late)
No minor or node upgrades from 1/31/23 - 4/29/23 will work (goldilocks)
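Putting those goldilocks dates into the command from the question (cluster name is a placeholder):

gcloud container clusters update <my-cluster-name> \
  --add-maintenance-exclusion-name suspend_upgrades_past_eol \
  --add-maintenance-exclusion-start 2023-01-31T00:00:00-05:00 \
  --add-maintenance-exclusion-end 2023-04-29T23:59:59-05:00 \
  --add-maintenance-exclusion-scope no_minor_or_node_upgrades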
I'm using the latest ibm_watson_machine_learning SDK (Python).
Until a few days/weeks ago my code was working fine, but now I get an error when running:
client.repository.store_model(model='./model.tar.gz', meta_props=model_metadata)
Here is some sample code:
https://github.com/IBMDecisionOptimization/oplrunonwml
Exception has occurred: IndexError
list index out of range
File "C:\Temp\oplrunonwml\oprunonwmlv2.py", line 126, in main
model_details = client.repository.store_model(model='./model.tar.gz', meta_props=model_metadata)
File "C:\Temp\oplrunonwml\oprunonwmlv2.py", line 215, in <module>
main(sys.argv[1:])
I get this error with various different models (OPL/CPLEX/DOcplex), and they all fail with the same error.
What's strange is that the model is uploaded correctly to the Deployment Space, and I can use it without problems in deployments/jobs in the UI or from other scripts.
The code was working fine without any changes a few weeks ago, so I assume something's changed on the API side.
Update:
I'm using a Cloud Lite account.
I'm also using the latest version of the SDK
client = APIClient(wml_credentials)
print(client.version) # 1.0.29
print(client.version_param) #2020-08-01
I deleted all my IBM services (Object Storage, Watson Studio) and created new ones, but I still get the same error.
I would suspect the WML v2 instance deployment.
*** With the v2 plan, users need to use the updated Python SDK (ibm-watson-machine-learning 1.0.38) ***
If you had a v1 instance before, then depending on your plan, it might have kept working without migration for a while.
Maybe you reached the end of this compatibility period.
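To rule that out quickly, upgrade the package and re-check the version the client reports (assuming the same wml_credentials as in your script):

# pip install --upgrade "ibm-watson-machine-learning>=1.0.38"
from ibm_watson_machine_learning import APIClient

client = APIClient(wml_credentials)
print(client.version)  # should now report 1.0.38 or later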
Can you clarify your plan type?
See https://medium.com/@AlainChabrier/migrate-your-python-code-for-do-in-wml-v2-instances-710025796f7
Alain
I am new to Liquibase and need to know whether Liquibase 3.5.x, using Oracle 12c and ojdbc7, has a default timeout when executing a changeset. I have tried running very long changesets, taking up to 24 hours, and Liquibase still doesn't time out even then. Is there a default timeout after which a Liquibase changeset expires?
If yes, I would like to change the default value to a custom-defined value.
I have all the source code for Liquibase 3.5.x, downloaded from https://github.com/liquibase/liquibase/tree/3.5.x.
I have already seen the post explaining an explicit way of defining the JDBC timeout: How can I set the Liquibase database connection timeout and retry count?. But I am looking for something related to a default timeout in Liquibase.
If there is a default timeout defined in the Liquibase source code, please point me to where I can find it and how to customize it according to my requirements.
In the source code, I can see some "timeout"s defined in the PostgreSQL files but cannot find any for Oracle. Please help me resolve this issue. Thanks.
I found a simple solution to the above-mentioned question.
For an Oracle JDBC connection:
Liquibase comes with many parameters, and one of them is --driverPropertiesFile=/path/to/file.properties, where we can specify the required JDBC properties; that properties file is then linked to the liquibase update command. For example, file.properties can contain oracle.jdbc.ReadTimeout=6000 (time in milliseconds). A sketch follows below.
- It is required to run "liquibase releaseLocks" after a timeout.
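A minimal sketch of the two pieces (paths are placeholders):

# file.properties -- Oracle JDBC driver properties
oracle.jdbc.ReadTimeout=6000

# then point Liquibase at it:
liquibase --driverPropertiesFile=/path/to/file.properties update

# and if the update was cut off by the timeout:
liquibase releaseLocks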
I am getting an error when deploying ADF pipelines. I don't understand how to resolve this error message:
Pipeline Populate SDW_dbo_UserProfiles from SDW_dbo_CTAS_ApptraxDevices is in Failed state. Cannot set active period Start=05/30/2017 00:00:00, End=05/29/2018 23:59:59 for pipeline 'Populate SDW_dbo_UserProfiles from SDW_dbo_CTAS_ApptraxDevices' due to conflicts on Output: SDW_dbo_UserProfiles with Pipeline: Populate SDW_dbo_UserProfiles from SDW_dbo_Manifest, Activity StoredProcedureActivityTemplate, Period: Start=05/30/2017 00:00:00, End=05/30/2018 00:00:00.
Try changing the active period or using the autoResolve option when setting the active period.
I am authoring and deploying from within Visual Studio 2015. All of my pipelines have the same values for start and end:
"start": "2017-05-30T00:00:00Z",
"end": "2018-05-29T23:59:59Z"
How do I resolve this issue?
Visual Studio can be fun sometimes when it comes to validating your JSON, because not only does it check everything in your solution, it also validates against what you already have deployed in Azure!
I suspect this error is because a pipeline that you have already deployed now differs from the one in Visual Studio. If you manually delete the affected pipeline from ADF in Azure and then redeploy, you should be fine (see the sketch below).
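If you'd rather script the manual delete, something like this should work with the AzureRM PowerShell module for ADF v1 (the resource group and factory names are placeholders; I haven't run this against your subscription):

Remove-AzureRmDataFactoryPipeline -ResourceGroupName "MyResourceGroup" `
    -DataFactoryName "MyDataFactory" `
    -Name "Populate SDW_dbo_UserProfiles from SDW_dbo_CTAS_ApptraxDevices"

Then redeploy from Visual Studio as normal.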
Sadly the tooling isn't yet clever enough to understand which values should take precedence and be overwritten at deployment time. So for now it simply errors because of a mismatch, any mismatch!
You will also encounter similar issues if you remove datasets from your solution. They will still be used for validation at deployment time, because the wizard first deploys all new things before trying to delete the old. I've fed this back to Microsoft already as an issue that needs attention for complex solutions with changing schedules.
Hope this helps.
I am trying to run a workflow on a Hortonworks cluster using Oozie.
Getting the following error:
Error: Invalid workflow-app, org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hive'.
Does anyone know the reason?
At least a sample hive workflow.xml which can be run on the Hortonworks distribution would be helpful.
This has to do with the first line of your workflow:
<workflow-app name="${workflowName}" xmlns="uri:oozie:workflow:0.4">
Specifically: uri:oozie:workflow:0.4
The xmlns value tells Oozie what XML schema to follow. I am assuming you used an online resource to build an action, which may be in a newer scheme than what you specified.
There are these versions:
- uri:oozie:workflow:0.1
- uri:oozie:workflow:0.2
- uri:oozie:workflow:0.2.5
- uri:oozie:workflow:0.3
- uri:oozie:workflow:0.4
See: Oozie Workflow Schemes
But usually setting yours to the example above (0.4) will work for most newer workflows.
Actions also have schemes, so it is important to look at what functions they have in each version.
The hive action currently goes up to 0.5 I believe, although I use 0.4 with this line (a full sample follows below):
<hive xmlns="uri:oozie:hive-action:0.4">
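Since a sample was requested, here's a minimal hive workflow sketch (the script name is a placeholder, and the jobTracker/nameNode values would come from your job.properties; untested, so treat it as a starting point):

<workflow-app name="sample-hive-wf" xmlns="uri:oozie:workflow:0.4">
    <start to="hive-node"/>
    <action name="hive-node">
        <hive xmlns="uri:oozie:hive-action:0.4">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <script>script.q</script>
        </hive>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Hive action failed: ${wf:errorMessage(wf:lastErrorNode())}</message>
    </kill>
    <end name="end"/>
</workflow-app>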
If this does not help, please update the question with your workflow for further help.