How to get a Cloud Foundry service's whitelisted IPs

We have a GUI that manages Cloud Foundry, and there's a link that shows an instance with a (quite large) IP whitelist for external dependencies. How can I easily export this config as JSON and recreate it in a different Cloud Foundry environment?

It's not entirely clear what is being presented in your GUI but it sounds like it might be the application security groups. You might try running cf security-groups or cf security-group <name> to see if this information matches up with what's displayed in the GUI.
If that's what you want, you can use the following API calls to obtain the JSON data & recreate it in another environment.
1.) List all the security groups: http://apidocs.cloudfoundry.org/1.40.0/security_groups/list_all_security_groups.html
2.) List security groups applied to all applications: http://apidocs.cloudfoundry.org/1.40.0/security_group_running_defaults/return_the_security_groups_used_for_running_apps.html
3.) List security groups applied to all staging containers: http://apidocs.cloudfoundry.org/1.40.0/security_group_staging_defaults/return_the_security_groups_used_for_staging.html
4.) Retrieve a particular security group: http://apidocs.cloudfoundry.org/1.40.0/security_groups/retrieve_a_particular_security_group.html
And you can find more details about the API calls here: http://apidocs.cloudfoundry.org/
You can also run the cf cli commands above with the -v flag to show the HTTP requests being made by the CLI to obtain the information that's displayed.
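For example, here is a rough sketch of exporting one group and recreating it in another environment. The group name my-whitelist, the target API URL, and the use of jq to extract the rules array are all assumptions for illustration:
# In the source environment: fetch the group as JSON and pull out its rules array.
cf curl "/v2/security_groups?q=name:my-whitelist" > my-whitelist.json
jq '.resources[0].entity.rules' my-whitelist.json > my-whitelist-rules.json

# In the target environment: recreate the group from the rules file.
cf api https://api.other-env.example.com
cf login
cf create-security-group my-whitelist my-whitelist-rules.json

# Bind it to a specific org/space, or make it a default for running/staging apps
# (matching items 2 and 3 above).
cf bind-security-group my-whitelist my-org my-space
cf bind-running-security-group my-whitelist
cf bind-staging-security-group my-whitelist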
Hope that helps!

Related

Injected DB credentials change when I deploy a new app version to the cloud

I deploy a web app to a local Cloud Foundry environment. As a database service for my DEV environment I have chosen the Marketplace service google-cloudsql-postgres with the plan postgres-db-f1-micro. Using the Web UI I created an instance with the name myapp-test-database and referenced it in the CF manifest:
applications:
- name: myapp-test
  services:
  - myapp-test-database
At first, all is fine. I can even redeploy the existing artifact. However, when I build a new version of my app and push it to CF, the injected credentials are updated and the app can no longer access the tables:
PSQLException: ERROR: permission denied for table
The tables are still there, but they're owned by the previous user. They were automatically created by the ORM in the public schema.
While the -OLD application still exists I can retrieve the old username/password from the CF Web UI or $VCAP_SERVICES and drop the tables.
Is this all because of Rolling App Deployments? But then there should be a lot of complaints.
If you are strictly doing a cf push (or restart/restage), the broker isn't involved (Cloud Controller doesn't talk to it), and service credentials won't change.
The only action through cf commands that can modify your credentials is doing an unbind followed by a bind. Many, but not all, service brokers will throw away credentials on unbind and provide new, unique credentials for a bind. This is often desirable so that you can rotate credentials if credentials are compromised.
Where this can be a problem is if you have custom scripts or cf cli plugins to implement rolling deployments. Most tools like this will use two separate application instances, which means you'll have two separate bindings and two separate sets of credentials.
If you must have one set of credentials, you can use a service key to work around this. Service keys are like bindings, but they are not associated with an application in Cloud Foundry.
The downside of a service key is that it's not automatically exposed to your application through $VCAP_SERVICES the way a binding is. To work around this, you can pass the service key creds into a user-provided service and then bind that to your application, or you can pass them into your application through other environment variables, like DB_URL.
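As a rough sketch of that workaround, using the service and app names from the question (the key name and the credential values are placeholders; use whatever cf service-key actually prints):
# Create a service key on the instance; its credentials are independent of any app binding.
cf create-service-key myapp-test-database myapp-test-key
cf service-key myapp-test-database myapp-test-key

# Option A: wrap those credentials in a user-provided service so they still show up
# in VCAP_SERVICES, then bind it to the app.
cf create-user-provided-service myapp-test-db-creds -p '{"uri":"postgres://myuser:secret@db-host:5432/mydb"}'
cf bind-service myapp-test myapp-test-db-creds
cf restage myapp-test

# Option B: pass them to the app as a plain environment variable instead.
cf set-env myapp-test DB_URL "postgres://myuser:secret@db-host:5432/mydb"
cf restage myapp-test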
The other option is to switch away from using scripts and cf cli plugins for blue/green deployment and to use the support that is now built into Cloud Foundry. With cf cli version 7+, cf push has a --strategy option which can be set to rolling to perform a rolling deployment. This does not create multiple application instances and so there would only ever exist one service binding and one set of credentials.
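For example, assuming cf CLI v7+ and the app name from the manifest above:
# One app instance throughout; old processes are replaced gradually, and the
# existing service binding (and its credentials) is reused.
cf push myapp-test --strategy rolling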
Request a static username using the extra bind parameter "username":
cf bind-service my-app-test-CANDIDATE myapp-test-database -c "{\"username\":\"myuser\"}"
With cf7+ it's possible to add parameters to the manifest:
applications:
- name: myapp-test
  services:
  - name: myapp-test-database
    parameters: { "username": "myuser" }
https://docs.cloudfoundry.org/devguide/services/application-binding.html#arbitrary-params-binding
Note: Arbitrary parameters are not supported in app manifests in cf CLI v6.x. Arbitrary parameters are supported in app manifests in cf CLI v7.0 and later.
However, I can't find the new syntax here: https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html#services-block . The syntax I use comes from some other SO question.

How do we register a PCF Service Broker as reachable from two spaces in the same PCF Org (with org admin permissions)?

How do I register a Pivotal Cloud Foundry Service Broker to make it accessible from multiple spaces within the same Organization, if I have Org-level permissions?
We tried to register a PCF Service broker (cf create-service-broker ...) in one space, then use it as a 'service instance' (cf create-service ...) in another space.
To illustrate the problem, consider the following work flow, from a HashiCorp Vault guide:
$ cf create-space examplespace
$ cf target -s examplespace
$ cf create-service-broker vault-broker "${AUTH_USERNAME}" "${AUTH_PASSWORD}" "https://${BROKER_URL}" --space-scoped
$ cf marketplace
service           plans    description
hashicorp-vault   shared   HashiCorp Vault Service Broker
# ...
$ cf create-service hashicorp-vault shared my-vault
The above works fine. The problem comes up when an app in a different space needs to consume the HashiCorp Vault API:
$ cf target -s myappspace
$ cf bind-service my-app my-vault
This last part fails.
Also, now that I'm in the space myappspace, cf marketplace does not show the new service broker.
Now, we have someone on our team with org-admin permissions.
I figured that we could just register the new service broker at the org level, using the enable-service-access subcommand:
https://docs.cloudfoundry.org/services/access-control.html#enable-access-to-service-plans
$ cf enable-service-access my-vault -o WebOrg
This failed as well: even though he had Admin permissions for the entire org, he got a permission denied error.
If we then go on to register the service broker in the second space, myappspace, that fails too.
All three of these methods failed, but there has to be some way to make a service from one space available to other spaces within an Org if I have administrative permissions for that PCF Org.
How?
A similar (although more specific) type of this issue is documented in the following two github issues for PCF's cloud_controller_ng repository:
https://github.com/cloudfoundry/cloud_controller_ng/issues/935
https://github.com/cloudfoundry/cloud_controller_ng/issues/837
I've done the following research:
https://docs.cloudfoundry.org/services/managing-service-brokers.html#register-broker
https://docs.cloudfoundry.org/services/access-control.html
https://docs.cloudfoundry.org/services/access-control.html#enable-access-to-service-plans
https://starkandwayne.com/blog/register-your-own-service-broker-with-any-cloud-foundry/
(We ran variations of every command on this page.)
The most similar of the existing questions on Stack Overflow were these:
WebSphere Message Broker - how to send a PCF message
Need help on Registering App on PCF with Spring Cloud Data Flow which is also on PCF
They don't seem to have much to do with name spacing issues in the PCF marketplace, or with PCF permissions management.
Note: At first I wanted to post this to serverfault.com, because this has more to do with the infrastructure for an application than with programming. But while serverfault.com has no tag for Pivotal Cloud Foundry, Stack Overflow already has a pivotal-cloud-foundry tag with 588 uses.
How do I register a Pivotal Cloud Foundry Service Broker to make it accessible from multiple spaces within the same Organization, if I have Org-level permissions?
I don't think you can do this. You'd need to be a platform admin/operator. Then you'd need to register the service broker with the platform and mark that broker as accessible to select orgs and spaces. You could then create service instances and, if the broker permits, share them across spaces.
If you only have org/space permissions, you can only register the service broker with a specific space. It's then only visible in that space.
Without platform admin/operator permissions, I think the best you could do would be this:
1.) register the broker in a specific space
2.) create a service instance in that space
3.) bind that to your apps in this space
4.) create a service key for your app in the second space
5.) switch to the second space
6.) create a user-provided service in that space and enter the service key info
Repeat steps 4-6 for each app in the second space (this ensures you get unique credentials per app; you could use one service key for all apps if you don't care about this), as sketched below.
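Concretely, with the names from the question (the Vault instance my-vault lives in examplespace, the app my-app in myappspace; the key name and credential fields are placeholders), steps 4-6 look roughly like this:
$ cf target -s examplespace
$ cf create-service-key my-vault my-app-key
$ cf service-key my-vault my-app-key        # note the credentials it prints

$ cf target -s myappspace
$ cf create-user-provided-service my-vault-creds -p '{"address":"https://vault.example.com:8200","token":"placeholder"}'
$ cf bind-service my-app my-vault-creds
$ cf restage my-app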
Happy to be corrected, but I think that is the state of things as I write this.
Assuming you are using PCF 2.1 or above.
Service brokers must explicitly enable service instance sharing by setting a flag in their service-level metadata object. This allows service instances, of any service plan, to be shared across orgs and spaces.
This is from Enabling Service Instance Sharing
Looks like you have already followed the rest of the steps from Sharing Service Instances.
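If the broker does set that flag and access to the plan has been enabled, the sharing itself is a single command run from the space that owns the instance (names taken from the question):
$ cf target -s examplespace
$ cf share-service my-vault -s myappspace

# In the app's space, the shared instance can then be bound like a local one.
$ cf target -s myappspace
$ cf bind-service my-app my-vault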

Azure Portal Deployment Options inaccessible with custom roles

I have a few websites running in Azure, and Bitbucket repositories are connected to the test slots. I'm trying to give the developer access to the Deployment Options (and Log) using the Azure portal, without him being able to do anything destructive to the Web Application itself.
I've found an article describing how to create custom roles, but whatever I try, I cannot give the developer read-only access to the Web App and still allow him to access the Deployment Options: both the Deployment Options and Continuous Delivery (Preview) are greyed out.
What I've done is create a new role based on the existing "Website Contributor" role (because that one does show the Web App's Deployment Options) and changed the Microsoft.Web/sites/* permissions to read-only:
Microsoft.Authorization/*/read
Microsoft.Insights/alertRules/*
Microsoft.Insights/components/*
Microsoft.ResourceHealth/availabilityStatuses/read
Microsoft.Resources/deployments/*
Microsoft.Resources/subscriptions/resourceGroups/read
Microsoft.Support/*
Microsoft.Web/certificates/*
Microsoft.Web/listSitesAssignedToHostName/read
Microsoft.Web/serverFarms/read
Microsoft.Web/sites/read
Microsoft.Web/sites/*/read
Microsoft.Web/sites/slots/read
Microsoft.Web/sites/slots/*/read
However, it only works when I replace the last 4 lines with this
Microsoft.Web/sites/*
But here lies the problem: I do not want to give the developer full access to the Web Apps. The thing that drives me crazy is that even if I query all actions for this resource provider using PowerShell
Get-AzureRMProviderOperation "Microsoft.Web/sites/*" | FT OperationName, Operation , Description -AutoSize
and add all of these individually instead of Microsoft.Web/sites/*, it still doesn't show the Deployment Options and Continuous Delivery.
Does anyone know why I need to give full access to the sites or how I can add readonly access to the site and still get access to the deployment options?
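For reference, the custom role I'm describing is registered from a JSON definition file roughly like the one below (the role name and subscription ID are placeholders), e.g. with New-AzureRmRoleDefinition -InputFile customrole.json:
{
  "Name": "Website Deployment Viewer (custom)",
  "IsCustom": true,
  "Description": "Read-only access to Web Apps plus access to deployment-related resources.",
  "Actions": [
    "Microsoft.Authorization/*/read",
    "Microsoft.Insights/alertRules/*",
    "Microsoft.Insights/components/*",
    "Microsoft.ResourceHealth/availabilityStatuses/read",
    "Microsoft.Resources/deployments/*",
    "Microsoft.Resources/subscriptions/resourceGroups/read",
    "Microsoft.Support/*",
    "Microsoft.Web/certificates/*",
    "Microsoft.Web/listSitesAssignedToHostName/read",
    "Microsoft.Web/serverFarms/read",
    "Microsoft.Web/sites/read",
    "Microsoft.Web/sites/*/read",
    "Microsoft.Web/sites/slots/read",
    "Microsoft.Web/sites/slots/*/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}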

How do I log in to kubernetes-cockpit UI if .kube/config contains a token instead of an account?

Numerous forum posts and documentation pages describe extracting the login info for a Kubernetes install from ~/.kube/config.
The problem I found: mine doesn't have a proper user account; it specifies a name and a token.
How do I get the account name so I can use the kubernetes-cockpit UI? Surprisingly there appears to be nothing on that topic - what to do if the config doesn't contain an account.
It depends on how you use Cockpit.
According to the Cockpit official page:
Used in a standard Cockpit session:
If a user is able to use kubectl successfully when at their shell terminal, then that same user will be able to use the Kubernetes dashboard when logged into Cockpit.
I suppose this is your scenario, so if you didn't change the default settings, Cockpit will look for .kube/config itself, i.e. you should be able to log in without specifying your account.
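As a quick sanity check (assuming kubectl is installed on the machine Cockpit runs on), you can confirm that the token-based config is usable before trying the Cockpit login:
# Show the cluster/context/user entries Cockpit will pick up; token values are redacted.
kubectl config view --minify

# If this succeeds, the token in ~/.kube/config is valid and the dashboard should work too.
kubectl get nodes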

Google Cloud Platform: Logging in to GCP from commandline

I was sure it would be simple, but I couldn't find any documentation or resolution.
I'm trying to write a script using gcloud to perform some operations in my GCP instances.
Is there any way to log in/authenticate using gcloud via the command line only?
Thanks
You have a couple of options here (depending on what exactly you're trying to do).
The first option is to log in using the --no-launch-browser option. This still requires interaction from a human user, but doesn't require a browser on the machine you're using:
> gcloud auth login --no-launch-browser
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&prompt=select_account&response_type=code&client_id=32555940559.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute&access_type=offline
Enter verification code: *********************************************
Saved Application Default Credentials.
You are now logged in as [user#example.com].
Your current project is [None]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
The non-interactive option involves service accounts. The linked documentation explains them better than I can, but the short version of what you need to do is as follows:
Create a service account in the Google Developers Console. Make sure it has the appropriate "scopes" (these are permissions that determine what this service account can do). Download the corresponding JSON key file.
Run gcloud auth activate-service-account --key-file <path to key file>.
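The console steps can also be scripted with gcloud itself. A minimal sketch, where the project ID, service account name, and role are placeholders:
# Create the service account and grant it only the role(s) the script needs.
gcloud iam service-accounts create my-script-sa --display-name "scripting account"
gcloud projects add-iam-policy-binding my-project \
    --member "serviceAccount:my-script-sa@my-project.iam.gserviceaccount.com" \
    --role "roles/compute.viewer"

# Download a JSON key and use it for a non-interactive login.
gcloud iam service-accounts keys create key.json \
    --iam-account my-script-sa@my-project.iam.gserviceaccount.com
gcloud auth activate-service-account --key-file key.json
gcloud config set project my-project
gcloud compute instances list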
Note that Google Compute Engine VMs come with a slightly different service account; the difference is described here.