How to delete IBM Cloud database instances having the same name - postgresql

I have the following identically named instances, as shown in the image.
The names are as follows:
stage-tas-postgres-service
stage-tas-postgres-service
stage-tas-postgres-service
I tried to delete them from the three-dots option, but the stage environment is blocked for deletion activity.
I have referred to the link below for deletion: IBM Cloud Deletion DB
We have an IAM identity, and through it I tried to delete the instance from a Jenkins job.
The command I ran after successfully logging in as the IAM user is as follows:
stage("Deleting resource") {
ibmcloud "resource service-instance-delete stage-tas-postgres-service --recursive"
}
The problem is that the job finishes successfully but does not delete the instance.
I am using only the third instance from the list above; the other two are unused, as shown in the image.
Is there any way to delete the DB by its CRN or deployment ID?
Thanks in advance.

The error says that you do not have the required permissions to delete the database. You can see and probably use that database instance, but not delete it.
It seems you are not the account owner or someone with administrator privilege. Therefore, someone else needs to delete the service.
For the future, you could set up a service ID with the required permissions, then use a script that logs in to IBM Cloud with that service ID and deletes the service.
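As a rough sketch of that approach (the API key variable, region, and resource group below are placeholders, not values from your account), the script could log in with the service ID's API key and delete the instance by its ID/CRN instead of its ambiguous name:

# Log in non-interactively with the service ID's API key (e.g. from Jenkins credentials)
ibmcloud login --apikey "$IBMCLOUD_API_KEY" -r us-south
ibmcloud target -g stage-resource-group
# List instances with their IDs so you can pick the right duplicate
ibmcloud resource service-instances --long
# Delete by ID/CRN instead of name so the correct duplicate is removed
ibmcloud resource service-instance-delete <instance-CRN> --recursive --force

This also addresses the CRN question above: service-instance-delete accepts an instance ID as well as a name.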

Related

Creating a user that's not a cloudsqlsuperuser in Cloud SQL using Terraform

I'd like to limit the privileges afforded to any given user that I create via the Google Terraform provider. By default, any user created is placed in the cloudsqlsuperuser group, and any new database created has that role/group as owner. This gives any user created via the GCP console or google_sql_user Terraform resource total control over any database that is (or was) created in a similar fashion.
So far, the best we've been able to come up with is creating and altering a user via a single-run k8s job. This seems circuitous, at best, especially given that that resource must then be manually imported later if we want to manage it via Terraform.
Is there a better way to create a user that has privileges limited to a single, application-specific database?
I was puzzled by this behaviour too. It's probably not the answer you want, but if you can use GCP IAM accounts, the user gets created in the PostgreSQL instance with NO roles.
There are 3 types of account you can create with "gcloud sql users create" or the Terraform resource "google_sql_user":
"CLOUD_IAM_USER", "CLOUD_IAM_SERVICE_ACCOUNT" or "BUILT_IN"
The default is the BUILT_IN type if not specified.
CLOUD_IAM_USER and CLOUD_IAM_SERVICE_ACCOUNT accounts get created with NO roles.
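For example, a minimal sketch of creating an IAM user from the CLI (the user and instance names are placeholders):

# Creates an IAM database user with no roles granted by default
gcloud sql users create alice@example.com \
    --instance=my-postgres-instance \
    --type=CLOUD_IAM_USER

In Terraform, the equivalent is setting type = "CLOUD_IAM_USER" on the google_sql_user resource.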
We are using these, as integration with IAM is useful in lots of ways (not managing passwords at the database level is a major plus, especially when used in conjunction with the SQL Auth Proxy).
BUILT_IN accounts (i.e. old-school accounts that need a postgres username and password) are, for some reason, granted the "cloudsqlsuperuser" role.
In the absence of being allowed the superuser role on GCP, this is about as privileged as you can get, so to me (and you) it seems a bizarre default.

How to delete service Association Link for Microsoft.DBforPostgreSQL/flexibleServers in Azure

How can I delete an Azure subnet after it has been associated with Microsoft.DBforPostgreSQL/flexibleServers? When I try to delete the subnet it says:
Failed to delete subnet 'db-subnet'.
Error: Subnet db-subnet is in use by application-vnet/db-subnet/db-subnet-service-association-link and cannot be deleted.
In order to delete the subnet, delete all the resources within the subnet.
See aka.ms/deletesubnet.
When I try to delete the service association link, it gives me a not-authorized error:
Azure Error: UnauthorizedClientApplication
Message: Unauthorized client application id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.
I tried from bash, PowerShell, and the REST API, and nothing worked. It does not matter whether the PostgreSQL flexible server is present or deleted; it gives me the same error, and now I am stuck with two subnets that cannot be deleted.
Apparently it's a known issue, and it takes an Azure support request to delete it.
(Or, possibly, you can recreate the server it was used for, disconnect it, and then delete it.)
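If you want to try the recreate-and-disconnect route, a rough sketch with the Azure CLI (the resource group and server names are placeholders, and this is not guaranteed to release the link):

# Recreate a flexible server attached to the stuck subnet, then remove it
az postgres flexible-server create --resource-group my-rg --name temp-flex \
    --vnet application-vnet --subnet db-subnet
az postgres flexible-server delete --resource-group my-rg --name temp-flex --yes
# Then retry deleting the subnet
az network vnet subnet delete --resource-group my-rg \
    --vnet-name application-vnet --name db-subnet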
https://learn.microsoft.com/en-us/answers/questions/140197/unable-to-delete-vnet-due-to-serviceassociationlin.html
https://learn.microsoft.com/en-us/answers/questions/169500/unable-to-delete-virtual-network-and-a-resource-gr.html
https://github.com/MicrosoftDocs/azure-docs/issues/48902
https://social.msdn.microsoft.com/Forums/en-US/f3fa0fb2-d930-484c-90a5-6860e360d87f/unable-to-delete-vnet-due-to-serviceassociationlinksappservicelink?forum=WAVirtualMachinesVirtualNetwork

Is there a way to use a non-login user to run Rundeck jobs?

So my goal is to create a Rundeck job that runs on a schedule and isn't run as my personal user, or any "regular" user, but rather a bot user. Ideally this bot user wouldn't have login access and would have restricted permissions for security reasons, but would be able to run certain jobs. I've tried searching, but the only information I'm finding is about how to create a "regular" user in Rundeck. Even if I go down the route of creating the bot user as a "regular" user, to use it you need to pass in either the login credentials or an API token. An API token would be fine if it could be generated and pulled in on the fly. However, that is not the case; API tokens themselves expire. If there is something I'm missing, please let me know. I'd love to get this working.
Rundeck Version: Rundeck 3.2.1-20200113
Rundeck Cli Version: 1.1.7
You can set the following configuration in your rundeck-config.properties file (usually in the /etc/rundeck/ directory):
rundeck.api.tokens.duration.max=0
This disables the maximum token lifetime; you can see this in the official documentation here.
With that, your "bot user" can do it through API / RD CLI as you wrote.
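For example, a minimal sketch of a scheduled script triggering a job with the bot's token (the URL, API version, job ID, and token variable are placeholders):

# Trigger a job run via the Rundeck API, authenticating with the bot's token
curl -X POST \
     -H "X-Rundeck-Auth-Token: $BOT_API_TOKEN" \
     "https://rundeck.example.com/api/36/job/a1b2c3d4/run"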
Try using webhooks: https://docs.rundeck.com/docs/manual/12-webhooks.html
You can trigger a job by making an HTTP request.
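A sketch of that (the webhook POST URL, which embeds its own auth token, is copied from the webhook's detail page in the Rundeck UI; $WEBHOOK_URL below stands in for it):

# POST to the webhook URL shown in the Rundeck UI; the body is passed to the job
curl -X POST -H "Content-Type: application/json" \
     -d '{"env": "stage"}' \
     "$WEBHOOK_URL"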
The way I've implemented bots is as a user who is a member of a 'bot' user group, with ACLs that lock down that group as required. Any passwords required for the scheduled job are loaded into the key storage of the bot user.
With this approach you still need someone who knows the bot credentials to login as them and set passwords/SSH keys, but that's a one-off. Is that what you're trying to avoid?
The one annoying thing I've found is that a scheduled job always seems to run as the last user to edit the job - so I grant edit access to bot users and make sure to set/reset the schedule after any edit by a normal user. Hoping to address this through https://github.com/rundeck/rundeck/issues/1603, you might want to give it a 👍.

icCube - XMLA authentication/authorization not working as expected

I am trying to limit a user to seeing only one schema over XMLA.
For that I have done the following:
Created a separate role without the full access check
In the Applications tab, checked only XMLA
In the Schemas tab, selected "Authorize Selected" and selected only one schema
Created a user with the just-created role
Applied the new user definitions
After those steps, when I connect via XMLA with the just-created user, I still see all schemas.
What am I doing wrong?
One important point when using the XMLA interface is to disable 'anonymous' login. With XMLA, if this mode is active, it is used in priority.
To change this you need to modify icCube.xml and restart the icCube server. See more in the online documentation here.

gsutil copy returning "AccessDeniedException: 403 Insufficient Permission" from GCE

I am logged in to a GCE instance via SSH. From there I would like to access Cloud Storage with the help of a service account:
GCE> gcloud auth list
Credentialed accounts:
- 1234567890-compute@developer.gserviceaccount.com (active)
I first made sure that this service account is flagged "Can edit" in the permissions of the project I am working in. I also made sure to give it the Write ACL on the bucket I would like it to copy a file to:
local> gsutil acl ch -u 1234567890-compute@developer.gserviceaccount.com:W gs://mybucket
But then the following command fails:
GCE> gsutil cp test.txt gs://mybucket/logs
(I also made sure that "logs" is created under "mybucket").
The error message I get is:
Copying file://test.txt [Content-Type=text/plain]...
AccessDeniedException: 403 Insufficient Permission 0 B
What am I missing?
One other thing to look for is to make sure you set up the appropriate scopes when creating the GCE VM. Even if a VM has a service account attached, it must be assigned devstorage scopes in order to access GCS.
For example, if you had created your VM with devstorage.read_only scope, trying to write to a bucket would fail, even if your service account has permission to write to the bucket. You would need devstorage.full_control or devstorage.read_write.
See the section on Preparing an instance to use service accounts for details.
Note: the default compute service account has very limited scopes (including read-only access to GCS). This is done because the default service account has Project Editor IAM permissions. This is typically not a problem for user-created service accounts, since they get full scope access by default.
After adding the necessary scopes to the VM, gsutil may still be using cached credentials which don't have the new scopes. Delete ~/.gsutil before trying the gsutil commands again. (Thanks to @mndrix for pointing this out in the comments.)
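To check which scopes the VM currently has, and to clear the cached credentials afterwards, a quick sketch (the instance name and zone are placeholders):

# Show the service accounts and scopes attached to the instance
gcloud compute instances describe my-vm --zone=us-central1-a \
    --format="yaml(serviceAccounts)"
# Drop gsutil's cached credentials so the new scopes take effect
rm -rf ~/.gsutil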
You have to log in with an account that has the permissions you need for that project:
gcloud auth login
gsutil config -b
Then browse to the URL it provides, click Allow, and copy the verification code back into the terminal.
Stop the VM.
Go to VM instance details.
Under "Cloud API access scopes", select "Allow full access to all Cloud APIs", then click "Save".
Restart the VM and delete ~/.gsutil.
I have written an answer to this question since I cannot post comments:
This error can also occur if you're running the gsutil command with a sudo prefix in some cases.
After you have created the bucket, go to the Permissions tab, add your email, and set the Storage Admin permission.
Access the VM instance via SSH, run gcloud auth login, and follow the steps.
Ref: https://groups.google.com/d/msg/gce-discussion/0L6sLRjX8kg/kP47FklzBgAJ
So I tried a bunch of things while trying to copy from a GCS bucket to my VM. Hope this post helps someone.
Via an SSHed connection, running:
sudo gsutil cp gs://[BUCKET_NAME]/[OBJECT_NAME] [OBJECT_DESTINATION_IN_LOCAL]
I got this error:
AccessDeniedException: 403 Access Not Configured. Please go to the Google Cloud Platform Console (https://cloud.google.com/console#/project) for your project, select APIs and Auth and enable the Google Cloud Storage JSON API.
What fixed this was following the "Activating the API" section mentioned in this link:
https://cloud.google.com/storage/docs/json_api/
Once I activated the API, I authenticated myself in the SSHed window via
gcloud auth login
Following the authentication procedure, I was finally able to download from the Google Storage bucket to my VM.
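The API can also be enabled from the CLI; a sketch (assuming storage-api.googleapis.com is the service name for the Cloud Storage JSON API, and that the target project is already set):

# Enable the Cloud Storage JSON API for the current project
gcloud services enable storage-api.googleapis.com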
PS: I did make sure to:
1. Make sure that gsutil is installed on my VM instance.
2. Go to my bucket, go to the Permissions tab, and add the desired service accounts with the Storage Admin role.
3. Make sure my VM had the proper Cloud API access scopes.
From the docs:
https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#changeserviceaccountandscopes
You need to first stop the instance, go to the edit page, go to "Cloud API access scopes", and choose storage full access or read/write, or whatever you need it for.
Changing the service account and access scopes for an instance: If you want to run the VM as a different identity, or you determine that the instance needs a different set of scopes to call the required APIs, you can change the service account and the access scopes of an existing instance. For example, you can change access scopes to grant access to a new API, or change an instance so that it runs as a service account that you created, instead of the Compute Engine default service account.
To change an instance's service account and access scopes, the instance must be temporarily stopped. To stop your instance, read the documentation for Stopping an instance. After changing the service account or access scopes, remember to restart the instance. Use one of the following methods to change the service account or access scopes of the stopped instance.
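As a sketch of that flow from the CLI (the instance name and zone are placeholders; the service account email is the one from the question above):

# Scopes can only be changed while the instance is stopped
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances set-service-account my-vm --zone=us-central1-a \
    --service-account=1234567890-compute@developer.gserviceaccount.com \
    --scopes=storage-rw
gcloud compute instances start my-vm --zone=us-central1-a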
Change the permissions of the bucket:
Add an entry for "allUsers" and give it "Storage Admin" access.