Error upgrading or removing MySQL Connector/NET - mysql-connector

I can't upgrade or remove MySQL Connector/NET 8.0.21. This is the log. Does anyone know what I need to do to upgrade it?
1: Action 14:16:48: INSTALL.
1: 1: MySQL Connector Net 8.0.21 2: {B76BB4C5-40E4-4D2C-8A18-8C85C304D084}
1: Action 14:16:48: FindRelatedProducts. Searching for related applications
1: Action 14:16:48: AppSearch. Searching for installed applications
1: Action 14:16:48: LaunchConditions. Evaluating launch conditions
1: Action 14:16:48: ValidateProductID.
1: Action 14:16:48: CostInitialize. Computing space requirements
1: Action 14:16:48: FileCost. Computing space requirements
1: Action 14:16:48: CostFinalize. Computing space requirements
1: Action 14:16:48: InstallValidate. Validating install
1: Action 14:16:48: Setv45InstallUtil.
1: Action 14:16:48: InstallInitialize.
1: Action 14:16:48: RemoveExistingProducts. Removing applications
1: Application: {0160C4A1-392C-4AFA-B8DB-2471FDA71425}, Command line: UPGRADINGPRODUCTCODE={B76BB4C5-40E4-4D2C-8A18-8C85C304D084} CLIENTPROCESSID=59360 CLIENTUILEVEL=3 MSICLIENTUSESEXTERNALUI=1 REMOVE=ALL
1: The older version of MySQL Connector Net 8.0.21 cannot be removed. Contact your technical support group.
1: 1: MySQL Connector Net 8.0.21 2: {B76BB4C5-40E4-4D2C-8A18-8C85C304D084} 3: 3
1: The action 'Upgrade' for product 'Connector/NET 8.0.21' failed.

By any chance, did you manually change the installation before trying to upgrade?
There's one bug related to that which may help you resolve your issue: https://bugs.mysql.com/bug.php?id=101060
If you still can't fix the problem, you can raise a bug at bugs.mysql.com; just be sure to provide details about the steps that led to it,
I mean, something that helps reproduce the problem, such as whether you tried this from MySQL Installer or from the MSI, or any other detail you think could matter.
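If neither of those helps, it can also be useful to attempt the removal directly with msiexec and keep a verbose log to see which step blocks RemoveExistingProducts; a minimal sketch, assuming the product code shown in your log and an elevated command prompt (the log path is just an example):
rem Try removing Connector/NET 8.0.21 by its product code, writing a verbose MSI log
msiexec /x {B76BB4C5-40E4-4D2C-8A18-8C85C304D084} /L*v "%TEMP%\connector_uninstall.log"
The resulting log is also the kind of detail worth attaching if you end up filing the bug.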

Related

ERROR running force:package:version:create: Invalid character in header content ["getApiVersion"]

a. If the Dev Hub org doesn't have its "Unlocked and 2GP Packages" option enabled, you will get the error message Invalid character in header content ["getApiVersion"]. You may want to check that setting.
b. This can also be caused by a lack of permissions for the user performing the operation. If you are using a limited-access user profile for the job, make sure the following permissions are part of your profile:
Create and Update Second-Generation Packages
Promote a package version to released
c. The error might also be caused by custom code; please reach out to your internal developers to investigate this further.
In my case it was option c. Adding this here to help the community.
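Once the Dev Hub setting and the permissions above are in place, re-running the create request shows whether one of those causes applied; a minimal sketch, where the package alias MyPackage and the Dev Hub alias DevHub are placeholders:
# Re-run the package version create against the Dev Hub (aliases are placeholders)
sfdx force:package:version:create -p MyPackage -x -w 10 -v DevHub
If it still fails with the same "getApiVersion" message after a and b are ruled out, that points back to option c, as it did for me.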

No trained NLU model found (Actions on Google)

We are developing a bot using the Actions on Google SDK. We migrated our dev project to UAT and all of a sudden it stopped working. Previously we used the same approach and it worked every time. The bot responds once to the initial phrase, and after that it stops responding; it says "Sorry, [Bot name] is not responding. Please try again later." After tracing the logs we found it is sending the error below. Please guide us on what is wrong with our approach.
{
labels: {3}
type: "assistant_action"
}
severity: "ERROR"
textPayload: "No trained NLU model found."
timestamp: "2022-02-17T12:00:35.499117218Z"
trace: ""
}
This issue is resolved now. Google resolved it, and we need to follow these steps: open your project -> go to Main invocation -> edit it by adding a prompt message -> save the changes -> "NLU model training in progress" will now show up at the bottom; wait until it finishes -> after that, try testing your action and it will work. Please see this for more.
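If the project is managed with the Actions SDK files rather than edited in the console, the same change (adding a prompt to the main invocation) can be pushed from the command line; a rough sketch, assuming the v3 gactions CLI is already logged in and run from the project directory:
# Push the local project files (including the edited main invocation) to the draft project
gactions push
# Deploy the draft to the preview environment for testing
gactions deploy preview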

Kubeflow fails to deploy using both CLI and Console

I deleted my KF cluster last night to create a new one (using the kubectl cluster command, not kfctl delete), and then when I tried to create a new one, it failed; it does not work with either the CLI or the Console. I found other people have run into this issue before, for example here and here.
"However, as I said even with CLI my deployment fails, the error from console is:
Failed to apply: (kubeflow.error): Code 500 with message: coordinator Apply failed for gcp: (kubeflow.error): Code 500 with message: gcp apply could not update deployment manager Error could not update storage-kubeflow.yaml; Insert deployment error: googleapi: Error 403: Request had insufficient authentication scopes.
More details:
Reason: insufficientPermissions, Message: Insufficient Permission"
and the error I get from Console is:
"Please enable APIs for your project and try again
Please enable cloud resource manager API: https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/ and iam API: https://console.developers.google.com/apis/api/iam.googleapis.com/"
Note that this error is wrong; all the APIs are already active. I'm quite sure this is a KF bug, but I'm not sure how to find a workaround. Any thoughts?
With the CLI, I'm using my own account, which has "owner" privileges.
Thanks
It seems you have an issue with IAM and the installation of Kubeflow, a third-party product that is itself not supported by us; nevertheless, I went ahead and dug up some information about this machine learning product.
The main issues (although it seems you have already covered permissions) are permissions, the number of projects, and some fine-grained points.
I was checking and found the following things that may help:
a) Troubleshooting Kubeflow [1]
b) Deploying Kubeflow in GKE [2]
c) Kubeflow auto-deployer for GKE [3]
There is also some discussion about mismatched permission settings in Kubeflow that may be worth reading [4].
Finally, there is a group, also working on a best-effort basis due to the nature of Kubeflow, "google-kubeflow-support#google.com", that may come in handy.
I trust this information will be useful for solving your issue.
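Because the two errors point at authentication scopes and at the Cloud Resource Manager and IAM APIs, it may also be worth re-checking both from the command line before retrying the deployment; a minimal sketch, assuming gcloud is configured for the project that hosts the KF deployment:
# Confirm which account and project the CLI is actually using
gcloud auth list
gcloud config get-value project
# Refresh application-default credentials before retrying the deployment
gcloud auth application-default login
# (Re-)enable the two APIs named in the console error
gcloud services enable cloudresourcemanager.googleapis.com iam.googleapis.com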

ADMG0007E: The configuration data type ConfigSynchronizationService is not valid

I'm trying to automate the WebSphere deployment process for zero downtime using the steps you can find in this link.
According to the documentation's first step, we should disable "Automatic Synchronization" for each node. To automate this, I'm using the script given in the documentation, but when I try to apply the command below:
set syncServ [$AdminConfig list ConfigSynchronizationService $na_id]
I get the error: ADMG0007E: The configuration data type ConfigSynchronizationService is not valid
I checked the IBM documentation, but I couldn't find any resource that refers to this problem.
Does anyone have any workaround or direct solution for it?
Thanks in advance for your suggestions.
PS: I should mention that the document was written for z/OS, but I'm trying to apply the same methodology to WebSphere AS on Linux.
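For context, the documented snippet is typically driven roughly like this on Linux; a sketch only, where the node name node01, the wsadmin.sh path, and the script name are all placeholders:
# Write the documented Jacl snippet to a file and run it through wsadmin
cat > disableAutoSync.jacl <<'EOF'
# Node agent of the node whose automatic synchronization is being disabled
set na_id    [$AdminConfig getid /Node:node01/Server:nodeagent/]
set syncServ [$AdminConfig list ConfigSynchronizationService $na_id]
$AdminConfig modify $syncServ {{autoSynchEnabled false}}
$AdminConfig save
EOF
/opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin/wsadmin.sh -lang jacl -f disableAutoSync.jacl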

Why the "vmc services" command only returns one table?

The vmc documentation from Cloud Foundry says the "vmc services" command will return two tables: the first contains the available service types, and the second contains the provisioned service instances. But I found that with the latest version of vmc, the "vmc services" command returns only one table, which contains the provisioned services. It's very inconvenient, as I cannot see what kinds of services the system can support.
Note: I found that a very old version of vmc can list both tables.
Has anyone else run into this issue?
For the latest version of vmc, please use
vmc info --services
to get all the services available. Use
vmc help COMMAND
to get the usage of each command.
I believe the CF team is working on refining the documentation for all of this. They should make it better soon.
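Putting the commands from this answer and the question together:
# System services the installation can provision (newer vmc)
vmc info --services
# Service instances already provisioned under your account
vmc services
# Usage details for a single command, e.g. the services command
vmc help services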