How to connect to an IKS environment using an external Rundeck? - ibm-cloud

I'm using an external Rundeck instance running outside IKS. Usually we log in to ibmcloud and then access IKS, but I need to access the IKS environment with a service account (cert & token). Is that possible?
If yes, how can I store this kubeconfig temporarily without writing it to .kube/config?

Create a 'Service ID' and, within that Service ID, create an 'API key':
- Log in to the ibmcloud console and choose Manage > Access (IAM)
- Create a Service ID
- Add the access policies by clicking the associated panel and then adding policies
- Choose the API keys panel and click on create
I do not know exactly what you mean by 'store temporarily' (see the sketch after the commands below), but your script can then log in using this API key and configure kubectl:
ibmcloud login --apikey 6JaR7NOTAREALKEYPc-E01i-mlwc7_8zd29foobar2NA -g yourgroup
ibmcloud ks cluster config --cluster yourcluster
kubectl ...
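As for storing it temporarily: a minimal sketch, assuming kubectl and recent versions of the ibmcloud ks plugin honor the KUBECONFIG environment variable (verify against your CLI version; $API_KEY and the cluster name are placeholders), is to point KUBECONFIG at a throwaway file so nothing lands in ~/.kube/config:
export KUBECONFIG=$(mktemp)    # cluster config gets written here instead of ~/.kube/config
ibmcloud login --apikey "$API_KEY" -g yourgroup
ibmcloud ks cluster config --cluster yourcluster
kubectl get pods               # kubectl picks up the temp file via KUBECONFIG
rm -f "$KUBECONFIG"            # discard the credentials once the Rundeck job finishes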

Related

IBM Cloud: How to bind Db2 Warehouse to Code Engine app?

I have an existing instance of Db2 Warehouse on Cloud which is deployed to an org and space. Now, I would like to bind that service to an app for deployment with IBM Cloud Code Engine.
ibmcloud ce application bind --name henriks-app --service-instance myDb2
myDb2 does not exist as an IAM resource because it is a Cloud Foundry (CF) resource. How would I bind the two together? It seems that I would need to create some form of custom wrapper.
The best way to manually connect services to your Code Engine application is to add service credentials to a Code Engine secret, and then attach that secret to your application using environment variables or volume mounting.
While you're correct that Db2 Warehouse isn't a typical IAM-Enabled service type, based on the IBM Cloud Db2 Warehouse docs, it's possible to create a client connection with Db2 Warehouse using an IAM Service ID & API Key.
Here's how I'd "bind" the Db2 instance to a Code Engine app:
1. Create a new service ID from the IAM Service IDs page.
2. Under "Assign Access" > "Access service ID additional access" > "IAM Service", you'll find "Db2 Warehouse" as an option, and you can configure exact permissions from there (e.g. which instance(s) to grant permissions to, which roles, etc.).
3. Finish the configuration by clicking "Assign access".
4. Using the CLI, log in to your account and generate a new API key, e.g. ibmcloud iam service-api-key-create mydb2key SERVICE_ID_NAME --output JSON > mydb2.json where SERVICE_ID_NAME is the name of the service ID created in Step 1.
5. Target your Code Engine project, then create a new secret using the API key JSON, e.g. ibmcloud ce secret create --name mydb2 --from-file MYDB2=mydb2.json
6. Attach the secret to your application as an environment variable, e.g. ibmcloud ce app update --name myapp --env-from-secret mydb2
After the app update goes through, your application will have access to an environment variable named MYDB2, which will have the value of a JSON object string containing your API Key.
You'll find more information about creating secrets and using secrets with applications in the Code Engine docs.
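To illustrate how the application then consumes the secret, here is a hypothetical snippet you could run inside the container; the apikey field name is what the IAM CLI writes into mydb2.json today (check your file if it differs), and jq is assumed to be available in the image:
DB2_APIKEY=$(echo "$MYDB2" | jq -r '.apikey')    # MYDB2 holds the JSON stored in the secret
echo "Loaded a Db2 API key of length ${#DB2_APIKEY}"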

How to pass a role to cli command "aws cloudformation deploy" or "sam deploy"?

I am creating a cloudformation stack using a SAM template and the CLI. I have successfully done this using an account that gets all the required permissions from policies directly attached to it. It's poor security practice to give this account all these permissions so I've created a role with the same policies attached and want to use that for deployment instead. However, even though I pass my role through the --role-arn parameter the command is still looking to the account for the permissions.
Here are the commands I've tried using:
aws cloudformation deploy --template-file TemplatePackaged.yaml --stack-name TestStack --capabilities CAPABILITY_IAM --region us-east-1 --role-arn arn:aws:iam::666488004797:role/LambdaApplicationCreateRole
or
sam deploy --template-file TemplatePackaged.yaml --stack-name TestStack --capabilities CAPABILITY_IAM --region us-east-1 --role-arn arn:aws:iam::666488004797:role/LambdaApplicationCreateRole
Unless the user logged into the CLI has the required permissions, I get this error with either command:
An error occurred (AccessDenied) when calling the DescribeStacks
operation: User: arn:aws:iam::666488004797:user/DummyUser1 is not
authorized to perform: cloudformation:DescribeStacks on resource:
arn:aws:cloudformation:us-east-1:666488004797:stack/Melissa/*
How do I get the deploy command to use the role passed in the --role-arn parameter to get the permissions it needs?
After a lot of reading and trial and error I found that Manoj's answer is correct, but the tricky part is the argument that one needs to pass as xyz in his answer. Here is what I had to do in order to pass a role:
I had to configure the role that I wanted to pass as a profile in the AWS CLI's config file. The --profile parameter that Manoj mentioned only works with profiles configured in this file (to the best of my knowledge). The way to configure a role as a profile is using the command:
aws configure --profile arbitraryName
What follows after --profile is just a label that you will use to refer to your role when you want to pass it; you can give it any name, but ideally you would name it after the role it will hold. Running this command will prompt you for a couple of fields. As far as I know, roles don't have an access_key or secret_access_key, so just hit Enter to skip these, as well as the region and output; you don't need those for your role. Next you will set the fields that roles actually need, using these commands:
aws configure set profile.arbitraryName.role_arn roleArn
aws configure set profile.arbitraryName.source_profile cliProfile
The roleArn is the ARN of the role you are configuring into the CLI; the cliProfile is a user already configured in the CLI that has rights to assume the role. Once this is done, whenever you want to pass the configured role in a command, you just add --profile arbitraryName as the last parameter of your command, and the command will use permissions from the role that was passed. The resulting config file is sketched below.
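For reference, this is roughly what ends up in the AWS config file (~/.aws/config by default) after the commands above; the profile name, role ARN, and source profile are the placeholders from this example:
[profile arbitraryName]
role_arn = arn:aws:iam::666488004797:role/LambdaApplicationCreateRole
source_profile = cliProfile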
*Interesting to know: passing a role this way does an implicit aws sts assume-role. If you know where your .aws folder is, you can go in and see a folder named cli, which contains a JSON file with the temporary credentials that are created when a role is assumed.
I had to do a lot of reading to figure this out, I hope this answer will save someone else some time.
There could be multiple approaches:
Assume the role and use a profile when deploying with AWS CloudFormation:
aws cloudformation deploy --template-file TemplatePackaged.yaml --stack-name TestStack --profile xyz
Launch an EC2 instance with an instance profile that has access to CloudFormation (wiring sketched after the command below); then you don't have to explicitly specify the role ARN or profile details:
aws cloudformation deploy --template-file TemplatePackaged.yaml --stack-name TestStack
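If you go the instance-profile route, a rough sketch of wiring it up could look like this (the profile name and instance ID are placeholders, and the role's trust policy must allow ec2.amazonaws.com to assume it):
aws iam create-instance-profile --instance-profile-name CfnDeployProfile
aws iam add-role-to-instance-profile --instance-profile-name CfnDeployProfile --role-name LambdaApplicationCreateRole
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=CfnDeployProfile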

Gcloud auth for all users on a server

I am trying to set up a gcloud auth login for an account on a server that will cover all users.
i.e.
I log in using an administrator account and issue the command,
e.g.
gcloud auth login auser@anemail.com
go through the steps required, and when I issue the gcloud auth list command I get the right result.
But other users cannot see it.
i.e. we use SAP Data Services, which uses a proxy account on the server when it is running,
e.g.
proxyaccount@mail.com
but that user cannot see the user I authorized using the administrator account.
I get the error "you do not currently have an active account selected".
The "other" accounts do not have administration access nor do we want them to, and besides I don't want to have to go through this process for each and every account that connects to the server.
Ian
Each user gets its own gcloud configuration folder. You can see which configuration folder is used by gcloud by running gcloud info.
Note that if your server is a VM on GCP you do not need to configure credentials as they are obtained from metadata server for the VM.
Sharing user credentials is not a good practice. If you need to do this, your users can set the CLOUDSDK_CONFIG environment variable to point to one shared configuration folder. Also, you should at least use a service account for this purpose and activate it via gcloud auth activate-service-account instead of using credentials obtained via gcloud auth login, as sketched below.
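A minimal sketch of that setup, with the shared folder path, project, and key file names as placeholders:
export CLOUDSDK_CONFIG=/opt/gcloud-shared-config    # every user points at the same config folder
gcloud auth activate-service-account proxyaccount@yourproject.iam.gserviceaccount.com --key-file=/opt/keys/proxy-sa.json
gcloud auth list                                    # the service account now shows as active for any user with this variable set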

How can I create a signed URL for Google Cloud Storage with a project level service account?

For every Google Compute instance, there is a default service account like this:
1234567890123-compute@developer.gserviceaccount.com
I can create my instance with the proper scope (i.e. https://www.googleapis.com/auth/devstorage.full_control) and use this account to make API requests.
On this page: https://cloud.google.com/storage/docs/authentication#service_accounts it says:
Every project has a service account associated with it, which may be used for authentication and to enable advanced features such as Signed URLs and browser uploads using POST.
This implies that I can use this service account to create Signed URLs. However, I have no idea how to create a signed URL with this service account, since I can't seem to get the private key (.p12 file) associated with this account.
I can create a new, separate service account from the developer console, and that has the option of downloading a .p12 file for signing, but the project level service accounts do not appear under the "APIs and auth / Credentials" section. I can see them under "Project / Permissions", but I can't do anything with them there.
Am I missing some other way to retrieve the private key for these default accounts, or is there no way to sign urls when using them?
You can use the p12 key of any of your service accounts while you're authenticated through your main account, a GCE service account, or another service account that has the appropriate permissions on the bucket and the file.
In this case, just create a service account, download its p12 key, and use the following command to sign your URL:
$ gsutil signurl -d 10m privatekey.p12 gs://bucket/foo
You can also authenticate using a different service account with the following command:
gcloud auth activate-service-account service-account-email --key-file key.p12
You can list and switch your accounts using these commands:
$ gcloud auth list
$ gcloud config set account ACCOUNT
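Putting it together, a rough end-to-end sketch (the service account, key file, and bucket names are placeholders, and gsutil signurl additionally requires the pyOpenSSL package):
gcloud iam service-accounts keys create signer.p12 --iam-account signer@yourproject.iam.gserviceaccount.com --key-file-type p12
gsutil signurl -d 10m signer.p12 gs://mybucket/foo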

gsutil copy returning "AccessDeniedException: 403 Insufficient Permission" from GCE

I am logged in to a GCE instance via SSH. From there I would like to access Cloud Storage with the help of a service account:
GCE> gcloud auth list
Credentialed accounts:
- 1234567890-compute@developer.gserviceaccount.com (active)
I first made sure that this service account is flagged "Can edit" in the permissions of the project I am working in. I also made sure to give it the Write ACL on the bucket to which I would like it to copy a file:
local> gsutil acl ch -u 1234567890-compute@developer.gserviceaccount.com:W gs://mybucket
But then the following command fails:
GCE> gsutil cp test.txt gs://mybucket/logs
(I also made sure that "logs" is created under "mybucket").
The error message I get is:
Copying file://test.txt [Content-Type=text/plain]...
AccessDeniedException: 403 Insufficient Permission 0 B
What am I missing?
One other thing to look for is to make sure you set up the appropriate scopes when creating the GCE VM. Even if a VM has a service account attached, it must be assigned devstorage scopes in order to access GCS.
For example, if you had created your VM with devstorage.read_only scope, trying to write to a bucket would fail, even if your service account has permission to write to the bucket. You would need devstorage.full_control or devstorage.read_write.
See the section on Preparing an instance to use service accounts for details.
Note: the default Compute Engine service account has very limited scopes (including read-only access to GCS). This is done because the default service account has Project Editor IAM permissions. If you use a user-created service account this is not typically a problem, since user-created service accounts get full scope access by default.
After adding the necessary scopes to the VM, gsutil may still be using cached credentials which don't have the new scopes. Delete ~/.gsutil before trying the gsutil commands again. (Thanks to @mndrix for pointing this out in the comments.)
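A sketch of the scope change from the CLI, with the instance name, zone, and service account as placeholders (the instance must be stopped before its scopes can be changed):
gcloud compute instances stop my-vm --zone us-central1-a
gcloud compute instances set-service-account my-vm --zone us-central1-a --service-account 1234567890-compute@developer.gserviceaccount.com --scopes storage-rw
gcloud compute instances start my-vm --zone us-central1-a
# then, inside the VM, clear gsutil's cached credentials:
rm -rf ~/.gsutil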
You have to log in with an account that has the permissions you need for that project:
gcloud auth login
gsutil config -b
Then browse to the URL it provides, click Allow, then copy the verification code and paste it into the terminal.
Stop the VM.
Go to VM instance details.
Under "Cloud API access scopes", select "Allow full access to all Cloud APIs", then click "Save".
Restart the VM and delete ~/.gsutil.
I have written an answer to this question since I cannot post comments:
This error can also occur if you're running the gsutil command with a sudo prefix in some cases.
After you have created the bucket, go to the permissions tab and add your email and set Storage Admin permission.
Access the VM instance via SSH, run gcloud auth login, and follow the steps.
Ref: https://groups.google.com/d/msg/gce-discussion/0L6sLRjX8kg/kP47FklzBgAJ
So I tried a bunch of things while trying to copy from a GCS bucket to my VM.
Hope this post helps someone.
Via an SSH connection, following this script:
sudo gsutil cp gs://[BUCKET_NAME]/[OBJECT_NAME] [OBJECT_DESTINATION_IN_LOCAL]
Got this error:
AccessDeniedException: 403 Access Not Configured. Please go to the Google Cloud Platform Console (https://cloud.google.com/console#/project) for your project, select APIs and Auth and enable the Google Cloud Storage JSON API.
What fixed this was following the "Activating the API" section mentioned in this link -
https://cloud.google.com/storage/docs/json_api/
Once I activated the API, I authenticated myself in the SSH window via
gcloud auth login
Following the authentication procedure I was finally able to download from the Google Storage bucket to my VM.
PS: I did make sure to:
1. Make sure that gsutil is installed on my VM instance.
2. Go to my bucket, go to the permissions tab, and add the desired service accounts with the Storage Admin permission/role.
3. Make sure my VM had the proper Cloud API access scopes:
From the docs:
https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#changeserviceaccountandscopes
You need to first stop the instance -> go to the edit page -> go to "Cloud API access scopes" and choose "storage full access or read/write", or whatever you need it for.
Changing the service account and access scopes for an instance:
If you want to run the VM as a different identity, or you determine that the instance needs a different set of scopes to call the required APIs, you can change the service account and the access scopes of an existing instance. For example, you can change access scopes to grant access to a new API, or change an instance so that it runs as a service account that you created, instead of the Compute Engine default service account.
To change an instance's service account and access scopes, the instance must be temporarily stopped. To stop your instance, read the documentation for Stopping an instance. After changing the service account or access scopes, remember to restart the instance. Use one of the following methods to change the service account or access scopes of the stopped instance.
Change the permissions of the bucket:
Add the "allUsers" member and give it "Storage Admin" access.