I want to set up AWS SSO and integrate it with G Suite as an external identity provider.
I followed the official blog post - https://aws.amazon.com/blogs/security/how-to-use-g-suite-as-external-identity-provider-aws-sso/
However, I'm unable to perform the user synchronization from G Suite into AWS SSO via the mentioned ssosync project - https://github.com/awslabs/ssosync. There's an open issue regarding the fact that ssosync is no longer available in the AWS Serverless Application Repository. I've also tried cloning and building the project manually, but I get a 404 error and can't find the reason why.
I am also unable to find a way to create users/groups in AWS SSO programmatically (I didn't find anything useful in the AWS SSO API reference).
Has anyone encountered this problem as well?
I think that approach does not work anymore. What about using the updated version instead?
https://github.com/awslabs/ssosync was updated to v2.0.0 a few days ago (Dec 2022).
I installed it from the AWS Serverless Application Repository and it seems to work.
It requires you to configure every possible variable before it will execute successfully. For variables that you don't wish to use, put *.
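On the programmatic user/group question: the Identity Store API (separate from the SSO Admin API reference) has offered CreateUser/CreateGroup operations since mid-2022. Below is a minimal sketch using the AWS SDK for JavaScript v3; the region, identity store ID, and user attributes are placeholders, and note that direct writes may be rejected when your identity source is an external IdP provisioned via SCIM (which is what ssosync does), so treat this as a sketch rather than a drop-in replacement.

import {
  IdentitystoreClient,
  CreateGroupCommand,
  CreateUserCommand,
} from "@aws-sdk/client-identitystore";

// Placeholders: replace with your region and the identity store ID shown
// on the IAM Identity Center (AWS SSO) settings page.
const client = new IdentitystoreClient({ region: "us-east-1" });
const identityStoreId = "d-1234567890";

async function createGroupAndUser(): Promise<void> {
  const group = await client.send(
    new CreateGroupCommand({
      IdentityStoreId: identityStoreId,
      DisplayName: "engineering", // placeholder group name
    })
  );
  console.log("created group", group.GroupId);

  // If the identity source is managed by an external IdP via SCIM,
  // this call may fail; users are then expected to arrive through SCIM.
  const user = await client.send(
    new CreateUserCommand({
      IdentityStoreId: identityStoreId,
      UserName: "jdoe", // placeholder attributes below
      DisplayName: "Jane Doe",
      Name: { GivenName: "Jane", FamilyName: "Doe" },
      Emails: [{ Value: "jdoe@example.com", Type: "work", Primary: true }],
    })
  );
  console.log("created user", user.UserId);
}

createGroupAndUser().catch(console.error);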
When one creates an ASP.NET Core Web API in Visual Studio 2022, and tests it locally, one gets a convenient Swagger page built upon an OpenAPI definition, to test all HTTP endpoints.
However, when deployed, trying to access {path-to-api}/swagger returns a 404 Not Found error. On localhost it works, both when the API and the database sit on my own machine and, for that matter, when the database is in the Azure cloud (with the Azure SQL Database connection string in appsettings.json).
So is there a way to achieve this, preferably without too much hassle? Or am I wrong in wanting this: do developers mostly test their APIs locally? I want the Swagger UI online only for testing.
The problem is getting the Swagger functionality up and running in the cloud. Is that possible, and is it good practice?
If you look at the startup code, you will notice that Swagger is only loaded during a development session via an if check. Commenting that check out, or expanding it based on environment, will allow a published version to generate the page on the target host.
// The project template only enables Swagger in the Development environment.
// Expand the check (e.g. || app.Environment.IsStaging()) to expose it elsewhere.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}
I generally do that for first publishes, or for Dev/Test environments, to see it running. Once it is no longer needed, I put the check back in.
It may also actually be viable (turned on) on a Dev or UAT server, because one may also be publishing the OpenAPI definition to APIM (Azure API Management), which takes the API and generates its own developer environment, separate from the initial publish.
Also, once published it is not the default page; one still has to navigate to it explicitly, e.g. .../swagger/index.html.
I'm aborting this mission of deploying the Swagger interface to Azure along with my API. It's bad security practice to make the HTTP request methods so visually available to all. So the answer to my question, "do developers mostly test their APIs locally?", is apparently yes.
I wondered if I should remove the question, but I would like to let it stand, in case anyone else is contemplating doing the same thing - exposing an API online with the Swagger UI.
I recently deployed a Next.js application for a software engineering boot camp. I am using Vercel for hosting the web app. The problem I am having has been spoken about on the internet before. However, I couldn't find much helpful information.
When I look at the real-time logs for my application from my Vercel dashboard, a 504 error gets thrown for multiple API routes I have created. I am aware that Vercel places restrictions on requests depending on the hosting plan someone subscribes to. However, I can't help but wonder if I have overlooked an important step when deploying my application.
When deploying my application, I did the following things:
Connected a session store to my MongoDB database.
Created a password-protected MongoDB Atlas account (credentials are environment variables).
White-listed all IP addresses so that any user can interact with their portion of the database.
I would appreciate help finding out if these errors are my fault and if there is anything I can do about them or if they are solely caused by the restrictions of the "Hobby" plan.
Thank you very much in advance,
-Sam
Screen shot: (screenshot of the Vercel real-time logs not included)
I had a similar issue. It turned out I had not added the Vercel IP addresses to the Network Access page in MongoDB Atlas, so MongoDB was blocking Vercel from accessing data.
You need to integrate MongoDB in your project on Vercel.
Go to your project settings in Vercel and open the Integrations tab. Click the Browse Marketplace button and find MongoDB. Click the Add Integration button and follow the instructions.
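Once the integration is added, it typically exposes the connection string as the MONGODB_URI environment variable. A minimal sketch of consuming it with a cached client, so warm serverless invocations reuse one connection instead of opening a new one each time (a common cause of slow or timed-out functions); the file name is a convention, not a requirement:

// lib/mongodb.ts (conventional location; assumes MONGODB_URI is set).
// The client promise is cached on the global object so repeated serverless
// invocations share one connection.
import { MongoClient } from "mongodb";

declare global {
  // eslint-disable-next-line no-var
  var _mongoClientPromise: Promise<MongoClient> | undefined;
}

const uri = process.env.MONGODB_URI;
if (!uri) throw new Error("MONGODB_URI is not set");

const clientPromise =
  global._mongoClientPromise ??
  (global._mongoClientPromise = new MongoClient(uri).connect());

export default clientPromise;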
Hope this helps... I know this was asked quite a while ago.
You have to troubleshoot the issue by doing the following:
1. Take the Next.js app out of the equation. Using Postman (https://www.postman.com/downloads/), confirm the output of your API and how long it takes. Given that function invocations have a time limit, you need to optimize the API to stay under that threshold.
2. If the API timings are fine when called from outside your app, the next step is to troubleshoot the API route itself: remove the DB parts, just echo back a success message (as sketched below), and check the function invocation time.
3. If #2 turns out to be the issue, reach out to Vercel support. Another option could be hosting the API elsewhere and whitelisting the cross-domain API calls from your application.
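The sketch referenced in step 2: a pages-router API route with the database calls stripped out, so the function invocation time can be measured in isolation (the route name and file location are assumptions):

// pages/api/echo.ts: a hypothetical route used only for troubleshooting.
// It echoes a success message without touching MongoDB, so the measured
// invocation time reflects the route alone.
import type { NextApiRequest, NextApiResponse } from "next";

export default function handler(
  req: NextApiRequest,
  res: NextApiResponse<{ ok: boolean; now: string }>
) {
  // If this route still 504s, the problem is not the database layer;
  // if it is fast, look at the DB connection and queries instead.
  res.status(200).json({ ok: true, now: new Date().toISOString() });
}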
I am trying to use Kubeflow on GCP and I am following this codelab, but "click-to-deploy" is no longer supported, so I followed the "kubectl and kpt" documentation instead. However, I keep getting the error "You cannot perform this action because the Cloud SDK component manager is disabled for this installation.", and none of the solutions I found worked. Two friends told me they have tried to make Kubeflow work since last year and it never worked for them, but I still see people posting questions about Kubeflow on Stack Overflow, so I want to ask: is it still working? If so, where can I find a decent guide to follow?
Thanks!
I finally got it working. For that error message, it turned out that I just hadn't installed the Cloud SDK properly. There will be a lot of other issues further down the road, but at least the Kubeflow web UI is working for me now.
Yes. As the "kubectl and kpt" guide says, the first step in preparing to install the cluster is installing gcloud, the CLI that manages authentication, local configuration, developer workflow, and interactions with the Google Cloud APIs.
Without it you simply can't work with the objects (in your case you need to enable the kpt anthoscli beta components) and perform tasks like creating a Compute Engine VM instance, managing a Google Kubernetes Engine cluster, and deploying an App Engine application, either from the command line or in scripts and other automations.
I am in the process of migrating Atlassian Confluence from on-prem to Kubernetes. I found the official Docker image for Confluence and was able to spin up the application. I need to configure SSL, and I already have the key and certificate. I tried to import the certificates and restarted with an updated server.xml, but it is not working. Has anyone worked on a Confluence migration from on-prem to Kubernetes/Docker? If anyone can provide a link or related experience, it would be helpful.
Regards,
John
It's certainly possible. The health check might be tricky; the reason is that, as far as I'm aware, there is no automated install once it goes live, meaning there will always be a manual configuration stage.
You're best off looking at some package-manager examples for this, which for Kubernetes means Helm. This allows you to iterate and roll back quickly.
Have a look at this example, which is for Jira, but the same flow should apply; Confluence and Jira are heavily related, so it should be relevant.
I have my code in a Git repository. I am using UDeploy to deploy my code into a MarkLogic environment. I am able to move all my modules successfully, but I am facing two problems:
1. Creating new indexes
2. REST endpoint creation
Please let me know if there is any way to implement these two.
For creating indexes, I have tried doing it with Admin API functions (admin:database-range-element-index()) and I was successful with that part. But is there any way to do it from UDeploy or a DevOps pipeline?
For registering a REST endpoint, I couldn't find any way to try it.
Have you looked at MarkLogic's REST Management API (https://docs.marklogic.com/REST/management)? In particular, see if https://docs.marklogic.com/REST/POST/manage/v2/databases will help you create indexes via the REST Management API.
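For illustration, a minimal sketch of adding a range element index by updating the database properties through the Management API (PUT /manage/v2/databases/{name}/properties). The host, credentials, database, and element names are placeholders, and it assumes the Manage app server on port 8002 accepts basic authentication:

// Sketch: add a range element index to the "Documents" database via the
// Management API (PUT /manage/v2/databases/{name}/properties).
// Caveat: the payload replaces the database's whole range-element-index
// list, so include any existing indexes in the array as well.
const properties = {
  "range-element-index": [
    {
      "scalar-type": "string",
      "namespace-uri": "",
      localname: "myElement", // placeholder element name
      collation: "http://marklogic.com/collation/",
      "range-value-positions": false,
      "invalid-values": "reject",
    },
  ],
};

async function addIndex(): Promise<void> {
  const res = await fetch(
    "http://localhost:8002/manage/v2/databases/Documents/properties",
    {
      method: "PUT",
      headers: {
        "Content-Type": "application/json",
        // MarkLogic defaults to digest auth on the Manage server; this
        // basic header only works if basic authentication is enabled.
        Authorization:
          "Basic " + Buffer.from("admin:admin").toString("base64"),
      },
      body: JSON.stringify(properties),
    }
  );
  if (!res.ok) throw new Error(`Management API returned ${res.status}`);
}

addIndex().catch(console.error);

Because it is plain HTTP, the same call can be wrapped in a UDeploy step. For your second problem, REST API instances can similarly be created over HTTP; see POST /v1/rest-apis in the same documentation.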
The most common way to deploy MarkLogic code and configuration is ml-gradle, a plugin for the widely used Gradle build tool. ml-gradle uses MarkLogic's Management API, mentioned by Ganesh, and is scriptable.