Unusable Artifactory API keys after upgrade from 6.11.1 to 7.33.12 - upgrade

After an upgrade of our Artifactory installation from 6.11.1 to 7.33.12, none of our users could use the API keys they had generated before the upgrade.
Due to a key mismatch in the cluster-join phase during the upgrade, we had to re-create the master.key file using the guide at this link.
I can't find any documentation or support requests about this kind of problem. Is this a common issue with the new version, or is there any way to recover the existing keys?
Are there any other ways to regenerate all the API keys for all users?
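If the old keys simply can't be decrypted any more after master.key was re-created, one pragmatic route is to revoke all existing API keys as an admin and let each user mint a new one through the Security REST API. The sketch below is only a sketch: the endpoint paths are recalled from the Artifactory Security REST API and should be verified against the docs for your 7.33.x build, and the base URL and credentials are placeholders.

```python
"""Hedged sketch: revoke every (now-undecryptable) API key as an admin, then
each user regenerates their own. Endpoint paths are from the Artifactory
Security REST API and should be checked against the 7.33.x docs."""
import requests

BASE = "https://artifactory.example.com/artifactory"  # placeholder base URL
ADMIN_AUTH = ("admin", "password")                     # placeholder credentials

# Admin: revoke all users' API keys in one call (irreversible).
resp = requests.delete(f"{BASE}/api/security/apiKey",
                       params={"deleteAll": 1}, auth=ADMIN_AUTH)
resp.raise_for_status()

# Each user then authenticates with username/password and creates a new key.
new_key = requests.post(f"{BASE}/api/security/apiKey",
                        auth=("some.user", "their-password"))
new_key.raise_for_status()
print(new_key.json().get("apiKey"))
```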

Related

AWS SSO integration with G suite

I want to make use of AWS SSO and integrate it with G Suite.
I followed the official blog post - https://aws.amazon.com/blogs/security/how-to-use-g-suite-as-external-identity-provider-aws-sso/
However, I'm unable to perform the user synchronization from G Suite into AWS SSO via the mentioned ssosync project - https://github.com/awslabs/ssosync. There's an open issue about ssosync no longer being available in the AWS Serverless Application Repository. I've tried to clone and build the project manually, but I get a 404 error and can't find the reason why.
I am also unable to find a way to create users/groups in AWS SSO programmatically (I didn't find anything useful in the AWS SSO API reference).
Has anyone encountered this problem as well?
I think that does not work anymore. What about using this one instead?
https://github.com/awslabs/ssosync was updated to v2.0.0 a few days ago (Dec 2022).
I installed it from AWS Serverless Application Repository and it seems to work.
It requires you to configure every possible variable before it will execute successfully. For variables that you don't wish to use, put *.
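On the "create users/groups programmatically" part of the question: the Identity Store API behind IAM Identity Center (the renamed AWS SSO) gained CreateUser / CreateGroup / CreateGroupMembership operations in late 2022, exposed through the boto3 identitystore client. A minimal sketch, assuming those operations are available in your region; the identity store ID, names and email are placeholders:

```python
"""Hedged sketch: create a group, a user, and a membership through the
Identity Store API (boto3 'identitystore' client). The identity store ID
below is a placeholder - yours is shown in the IAM Identity Center settings."""
import boto3

ids = boto3.client("identitystore", region_name="eu-west-1")
IDENTITY_STORE_ID = "d-1234567890"  # placeholder

group = ids.create_group(
    IdentityStoreId=IDENTITY_STORE_ID,
    DisplayName="developers",
)

user = ids.create_user(
    IdentityStoreId=IDENTITY_STORE_ID,
    UserName="jane.doe@example.com",
    DisplayName="Jane Doe",
    Name={"GivenName": "Jane", "FamilyName": "Doe"},
    Emails=[{"Value": "jane.doe@example.com", "Type": "work", "Primary": True}],
)

ids.create_group_membership(
    IdentityStoreId=IDENTITY_STORE_ID,
    GroupId=group["GroupId"],
    MemberId={"UserId": user["UserId"]},
)
```

Bear in mind that users created this way live in the Identity Center identity store itself; if ssosync v2 is syncing from Google Workspace, the sync remains the source of truth for those identities.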

Migrating Atlassian Confluence to Kubernetes

I am in the process of migrating Atlassian Confluence from on-prem to Kubernetes. I found the official Docker image for Confluence and was able to spin up the application. I now need to configure SSL, and I already have the key and certificate. I tried to import the certificates, updated server.xml, and restarted, but it is not working. Has anyone worked on a Confluence migration from on-prem to Kubernetes/Docker? If anyone can share a link or related experience, it would be helpful.
Regards,
John
It's certainly possible. The health check might be tricky because, as far as I'm aware, there is no automated install once it goes live, meaning there will always be a manual configuration stage.
You're best off looking at package-manager examples for this, which for Kubernetes means Helm. That lets you iterate and roll back quickly.
Have a look at this example, which is for Jira, but the same flow should apply. Confluence and Jira are closely related, so it should be relevant.
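On the SSL part: rather than importing the certificate and editing server.xml inside the container, a common pattern is to keep the Confluence pod on plain HTTP (port 8090) and terminate TLS in front of it, for example at an Ingress, with your existing key and certificate stored as a Kubernetes TLS secret. A minimal sketch using the kubernetes Python client; the secret name, namespace and file paths are hypothetical:

```python
"""Hedged sketch: store the existing key/certificate as a Kubernetes TLS secret
so TLS can be terminated at an Ingress in front of Confluence instead of inside
Tomcat's server.xml. Secret name, namespace and file paths are hypothetical."""
import base64
from kubernetes import client, config

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

config.load_kube_config()  # or load_incluster_config() when running in-cluster

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="confluence-tls", namespace="confluence"),
    type="kubernetes.io/tls",
    data={"tls.crt": b64("confluence.crt"), "tls.key": b64("confluence.key")},
)

client.CoreV1Api().create_namespaced_secret(namespace="confluence", body=secret)
```

The secret can then be referenced from the Ingress TLS section (or the Helm chart's values); you'll still need to tell Tomcat it sits behind an https proxy, which, if I recall correctly, the official image lets you do via environment variables rather than hand-editing server.xml.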

REST endpoint registration and bootstrap (creating range indexes) using UDeploy

I have my code in a Git repository and am using UDeploy to deploy it into a MarkLogic environment. I am able to move all my modules successfully, but I am facing two problems:
1. Creating new indexes
2. REST endpoint creation
Please let me know if there is any way to implement these two.
For creating indexes, I have tried using API functions (admin:database-range-element-index()) and was successful with that part. But is there any way to do it from UDeploy or a DevOps pipeline?
For registering the REST endpoint, I couldn't find any way to even attempt it.
Have you looked at MarkLogic's REST Management API - https://docs.marklogic.com/REST/management? In particular, see whether https://docs.marklogic.com/REST/POST/manage/v2/databases will help you create indexes via the Management API.
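Building on that, range indexes can also be added by updating the database properties with a PUT to /manage/v2/databases/{name}/properties, which a UDeploy step could run as a script. A minimal sketch, assuming digest auth on the default management port 8002; the host, credentials, database name and index definition are placeholders, and note that a PUT replaces list-valued properties, so the existing indexes are read first:

```python
"""Hedged sketch: add a range element index by updating the database's
properties through MarkLogic's Management API (port 8002, digest auth by
default). Host, credentials and index details are placeholders."""
import requests
from requests.auth import HTTPDigestAuth

BASE = "http://marklogic-host:8002/manage/v2/databases/my-content-db/properties"
AUTH = HTTPDigestAuth("admin", "admin")

# Read the current index list so we append rather than overwrite it.
current = requests.get(BASE, params={"format": "json"}, auth=AUTH).json()
indexes = current.get("range-element-index", [])

indexes.append({
    "scalar-type": "dateTime",
    "namespace-uri": "",
    "localname": "created-on",
    "collation": "",
    "range-value-positions": False,
    "invalid-values": "reject",
})

resp = requests.put(BASE, json={"range-element-index": indexes}, auth=AUTH)
resp.raise_for_status()  # a reindex may kick off after this change
```

For the REST endpoint itself, a similar call exists: if memory serves, POSTing a rest-api payload to /v1/rest-apis on port 8002 creates a REST instance, so that can be scripted from UDeploy in the same way.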
The most common way to deploy MarkLogic code & configuration is ml-gradle, a plugin for the widely used Gradle build tool. ml-gradle uses MarkLogic's Management API, mentioned by Ganesh, and is scriptable.

Deployment synchronization Issue - API Manager 1.8

I have clustered WSO2 API Manager 1.8 and implemented deployment synchronization according to this guide from the WSO2 documentation. Everything appears to work fine except for one thing.
Let's assume we have the two instances below running API Manager:
192.168.X.123 - API Manager 1
192.168.X.124 - API Manager 2
The problem is that once I create and publish an API on API Manager 1, it does appear in the Publisher on API Manager 2, but that API does not appear in the Store on API Manager 2.
Also note that I'm using a shared MySQL database for the API Manager cluster (API Manager 1 and 2). I checked the logs, but they do not contain any errors.
How can I fix this?
Please look at the WSO2 Clustering and Deployment Guide, in particular the docs on Clustering API Manager and the SVN-based Deployment Synchronizer.
If all configurations are correct, your API should be displayed in the Store once you have published it. It might take some time (maybe a few minutes) to appear in the Store due to indexing, etc.
In our situation, we defined two different servers with extra CPU and memory; on these servers we installed the full WSO2 API Manager and defined the cluster configuration, everything provisioned via Puppet. It is just a straightforward install, with all data sources pointing to one schema in an Oracle database. And... it is working: our developers are happy, operations is happy, and the architecture department is happy.
From WSO2 API Manager Clustering configuration

Azure deployment versions

I will try to keep it simple. I am using the Windows Azure cloud to host our web services and databases, and these web services are accessible via the URL "https://server.mydomain.com".
Now we have made a few major changes to our model, and hence to the web services as a whole, which breaks the API interface for older users. We want to deploy the latest version at the URL "https://server.mydomain.com/v2" so that old users can still access the older version.
I searched around SO and other resources, but I couldn't find a definitive answer on how to deploy the new version without disturbing the old one.
Any pointer in the right direction would be helpful.
In one of the projects I was working on, we built in a versioning scheme on top of our Web API. We used this tutorial to get started. I would recommend starting there.
Sorry for the generic answer; if you post some more specifics I will update it.
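This isn't the ASP.NET Web API setup from that tutorial, but as an illustration of the URL-path versioning idea itself, here is a minimal Flask sketch (hypothetical routes and payloads) that keeps v1 and v2 of the same endpoint deployed side by side so old clients keep working:

```python
"""Hedged illustration of URL-path versioning (not the ASP.NET Web API approach
from the linked tutorial): both versions of a hypothetical endpoint stay
deployed; old clients keep their original URLs, new clients call /v2/..."""
from flask import Flask, jsonify, Blueprint

v1 = Blueprint("v1", __name__)
v2 = Blueprint("v2", __name__)

@v1.route("/orders/<int:order_id>")
def get_order_v1(order_id):
    # Old contract: flat payload that existing clients depend on.
    return jsonify({"id": order_id, "status": "shipped"})

@v2.route("/orders/<int:order_id>")
def get_order_v2(order_id):
    # New contract: restructured model that would break v1 clients.
    return jsonify({"order": {"id": order_id, "state": {"code": "SHIPPED"}}})

app = Flask(__name__)
app.register_blueprint(v1)                    # legacy, unversioned paths
app.register_blueprint(v2, url_prefix="/v2")  # new version side by side

if __name__ == "__main__":
    app.run()
```

The same split can of course be made per host name instead of per path, as the next answer suggests.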
I'd suggest deploying a separate cloud service and using "v2.server.mydomain.com".