Creating multiple users in Bluemix BigInsights to test Knox service - ibm-cloud

I have created a space and a BigInsights cluster on Bluemix. In order to test Knox, I need multiple users for authentication. Is it possible to create users in the Bluemix BigInsights service? The ID that is provided to access the cluster does not have root access. Also, it would be helpful if someone could explain in detail how admin-related tasks (adding more components such as Hue or Drill using yum commands) can be performed in the Bluemix BigInsights service. Thanks in advance.

I am guessing here that you have created a Bluemix BigInsights Basic (Beta) plan.
This service is a single-user service and cannot have multiple users.
In addition, this service is a managed service, and installation of software by the user is not allowed.
This service comes with a fixed set of preconfigured settings and pre-installed software. If you do need something beyond this, I would suggest opening a BigInsights service ticket through the Bluemix Support page, stating what software you need and why you need it.
The product management team will look at it and see whether it can be pre-installed in a future release.
Such installations will not be done on any Basic (Beta) plan for individual clusters.

Related

Cannot deploy Kubeflow on GCP: tells me to enable APIs that are already enabled

I am trying to install Kubeflow on Google Cloud Platform (GCP) and Kubernetes Engine (GKE), following the GCP deployment guide.
I created a GCP project of which I am the owner, I enabled billing, set up OAuth credentials and enabled the following APIs:
Compute Engine API
Kubernetes Engine API
Identity and Access Management (IAM) API
Deployment Manager API
Cloud Resource Manager API
Cloud Filestore API
AI Platform Training & Prediction API
However, when I want to deploy Kubeflow using the UI, I get the following error:
So I double-checked, and those APIs are already enabled:
The log messages at the bottom of the screen are:
2020-03-06 14:14:04.629: Getting enabled services for project <projectname>..
2020-03-06 14:14:16.909: Could not configure communication with GCP, exiting
The Could not configure communication with GCP, exiting is triggered when _enableGcpServices() fails.
The line Getting enabled services for project ... is printed but not the line Proceeding with project number: ..., so the error must be triggered somewhere in the block of code between those lines.
The call to Gapi.cloudresourcemanager.getProjectNumber(project) has its own try/catch with a slightly different error message and title (it only mentions the Cloud Resource Manager API, not the IAM API), so I assume it is the call to Gapi.getSignedInEmail() that fails?
I'd suggest having a look at the Service Management API, the IAM Service Account Credentials API, and possibly the Cloud Identity-Aware Proxy API. I've only used the CLI install tool previously and haven't run into these problems, but you might require these services for the IAP deployment.
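If you want to rule that out before digging further, the gcloud CLI can enable and list services directly. A minimal sketch, assuming gcloud is installed and authenticated, and assuming these three service IDs are the ones actually missing:
$ gcloud config set project <projectname>
# Assumed service IDs for the three APIs suggested above:
$ gcloud services enable \
    servicemanagement.googleapis.com \
    iamcredentials.googleapis.com \
    iap.googleapis.com
# Compare the result against what the deployment UI checks for:
$ gcloud services list --enabled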
I faced the same issue and was able to solve it by correcting the project id.
Make sure that the project id on the UI form is specified exactly as it is on the GCP project, and that it does not have any leading or trailing spaces if you copy-pasted it from the GCP project details like I did.
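A quick way to double-check, assuming the gcloud CLI is available (the project id, not the display name or project number, is what the form expects):
$ gcloud projects list --format="table(projectId, name, projectNumber)"
# Or print just the id of a single project to compare against the form:
$ gcloud projects describe <projectname> --format="value(projectId)"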
I had the same issue. I was on a trial account, and it seems they only allow a limited number of projects to use a billing account at the same time. So I shut down the unused ones: I went to Billing --> My projects, disabled the unused projects via the three-dot menu, and then tried to enable the billing account for the current project. It worked.

How do we register a PCF Service Broker as reachable from two spaces in the same PCF Org (with org admin permissions)?

How do I register a Pivotal Cloud Foundry Service Broker to make it accessible from multiple spaces within the same Organization, if I have Org-level permissions?
We tried to register a PCF Service broker (cf create-service-broker ...) in one space, then use it as a 'service instance' (cf create-service ...) in another space.
To illustrate the problem, consider the following workflow, from a HashiCorp Vault guide:
$ cf create-space examplespace
$ cf target -s examplespace
$ cf create-service-broker vault-broker "${AUTH_USERNAME}" "${AUTH_PASSWORD}" "https://${BROKER_URL}" --space-scoped
$ cf marketplace
service            plans    description
hashicorp-vault    shared   HashiCorp Vault Service Broker
# ...
$ cf create-service hashicorp-vault shared my-vault
The above works fine. The problem comes up when we have an app in a different space that we want to consume the HashiCorp Vault API:
$ cf target -s myappspace
$ cf bind-service my-app my-vault
This last part fails.
Also, now that I'm in the space myappspace, cf marketplace does not show the new service broker.
Now, we have someone on our team with org-admin permissions.
I figured that we could just register the new service broker at the org level, using the enable-service-access subcommand:
https://docs.cloudfoundry.org/services/access-control.html#enable-access-to-service-plans
$ cf enable-service-access my-vault -o WebOrg
This failed as well: even though he had admin permissions for the entire org, he got a permission-denied error.
If we then go on to register the service broker in the second space, myappspace, we get an error as well.
All three of these methods failed, but there has to be some way to make a service from one space available to the others within an org, if I have administrative permissions for that PCF org.
How?
A similar (although more specific) type of this issue is documented in the following two github issues for PCF's cloud_controller_ng repository:
https://github.com/cloudfoundry/cloud_controller_ng/issues/935
https://github.com/cloudfoundry/cloud_controller_ng/issues/837
I've done the following research:
https://docs.cloudfoundry.org/services/managing-service-brokers.html#register-broker
https://docs.cloudfoundry.org/services/access-control.html
https://docs.cloudfoundry.org/services/access-control.html#enable-access-to-service-plans
https://starkandwayne.com/blog/register-your-own-service-broker-with-any-cloud-foundry/
(We ran variations of every command on this page.)
The most similar of the existing questions on Stack Overflow were these:
WebSphere Message Broker - how to send a PCF message
Need help on Registering App on PCF with Spring Cloud Data Flow which is also on PCF
They don't seem to have much to do with name spacing issues in the PCF marketplace, or with PCF permissions management.
Note: At first I wanted to post this to serverfault.com, because this has more to do with the infrastructure for an application than with programming. But while serverfault.com has no tag for Pivotal Cloud Foundry, Stack Overflow already has a pivotal-cloud-foundry tag with 588 uses.
How do I register a Pivotal Cloud Foundry Service Broker to make it accessible from multiple spaces within the same Organization, if I have Org-level permissions?
I don't think you can do this. You'd need to be a platform admin/operator. Then you'd need to register the service broker with the platform & mark that broker as accessible to select orgs & spaces. You could then create service instances and, if the broker permits, share them across spaces.
If you only have org/space permissions, you can only register the service broker with a specific space. It's then only visible in that space.
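For contrast, a sketch of what a platform operator would run, reusing the names from the Vault example above. Note that enable-service-access takes the service offering name from the broker's catalog (hashicorp-vault), not a service instance name (my-vault), which may be an additional reason the attempt above was rejected:
# Run with platform admin (cloud_controller.admin) credentials:
$ cf create-service-broker vault-broker "${AUTH_USERNAME}" "${AUTH_PASSWORD}" "https://${BROKER_URL}"
$ cf enable-service-access hashicorp-vault -o WebOrg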
Without platform admin/operator permissions, I think the best you could do would be this:
register the broker in a specific space
create a service instance in that space
bind that to your apps in this space
create a service key for your app in the second space
switch to the second space
create a user provided service in that space and enter the service key info
Repeat steps 4-6 for each app in the second space (this ensures you get unique credentials per app; you could use one service key for all apps if you don't care about this). A scripted sketch of these steps follows.
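A minimal sketch of steps 2-6 using the Vault example from the question; the credential fields passed to the user-provided service are placeholders, since they depend on what the broker actually puts in its service keys:
# In the space where the space-scoped broker lives:
$ cf target -s examplespace
$ cf create-service hashicorp-vault shared my-vault
$ cf create-service-key my-vault my-app-key
$ cf service-key my-vault my-app-key      # prints the credentials JSON
# In the consuming space, wrap those credentials in a user-provided service:
$ cf target -s myappspace
$ cf create-user-provided-service my-vault-ups -p '{"address":"<from-key>","token":"<from-key>"}'
$ cf bind-service my-app my-vault-ups
$ cf restage my-app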
Happy to be corrected, but I think that is the state of things as I write this.
Assuming you are using PCF 2.1 or above.
Service brokers must explicitly enable service instance sharing by setting a flag in their service-level metadata object. This allows service instances, of any service plan, to be shared across orgs and spaces.
This is from Enabling Service Instance Sharing
Looks like you have already followed the rest of the steps from Sharing Service Instances.
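If your environment meets those requirements, the sharing itself is a single command per target space. A sketch, assuming cf CLI v6.36 or later and that an operator has enabled the service_instance_sharing feature flag:
$ cf target -s examplespace
$ cf share-service my-vault -s myappspace
# Then bind from the space the instance was shared into:
$ cf target -s myappspace
$ cf bind-service my-app my-vault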

Unable to create Kubernetes Cluster on IBM Bluemix

I have been trying to create a Kubernetes Cluster with my Bluemix account owner but always getting the following error upon creation:
IBM Cloud Infrastructure exception: Your account is currently prohibited from ordering 'Computing Instances'.
Any idea what the issue is? There seems to be no direct way to getting support from Public Bluemix to address this issue. We opened a ticket but it has not been addressed.
You should contact IBM Bluemix Support for this kind of question. Before you log in to the Bluemix console, there is a Support link.
From the look of the exception, it seems like you are trying to create a "second" Kubernetes cluster. If this is what you are trying to do, you will need a SoftLayer account, or else your ID in your SoftLayer account is not set up properly.
You need admin rights to create clusters in Bluemix. Just make sure that you get admin status and it should work for you. The normal permissions granted to you are those of a user. Hope this helps.
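For reference, a sketch of the calls that trigger the order, using the Bluemix CLI container-service plugin of that era; the location and machine type are placeholders, and flag names may differ by plugin version:
$ bx plugin install container-service -r Bluemix
$ bx cs init
# A free/lite cluster does not order SoftLayer hardware:
$ bx cs cluster-create --name my-cluster
# A standard cluster is what orders 'Computing Instances' on the linked
# SoftLayer account, which is where the error above comes from:
$ bx cs cluster-create --name my-cluster --location dal10 \
    --machine-type u2c.2x4 --workers 2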

Best practices for setting up developer access to Azure Resources

I would like to find out what the best practices are for managing developers' access to a sub-set of resources on a client's subscription?
I've searched Google and the Azure documentation looking for definitive answers, but I have yet to come across an article that puts it all together. Because Azure is still developing so rapidly I often find it difficult to determine whether a particular article may still be relevant.
To sum up our situation:
I've been tasked with researching and implementing the Azure infrastructure for a web site our company is developing for a client. At the moment our manager and I have access to the client's entire subscription on the Azure Portal by means of the Service Administrator's credentials, even though we're managing only:
Azure Cloud Service running a Web-Role (2-instances with Production and Staging environments).
Azure SQL Database.
Azure Blob Storage for deployments, diagnostics etc.
We're now moving into a phase where more of the developers on the team will require access to perform maintenance-type tasks such as performing a VIP swap, retrieving diagnostic info, etc.
What is the proper way to manage developer's access on such a project?
The approach I've taken is to implement Role-Based Access Control (https://azure.microsoft.com/en-us/documentation/articles/role-based-access-control-configure/) as follows; a scripted sketch of the role assignment follows the list:
Moving 1, 2, and 3 above into a new Resource Group according to http://blog.kloud.com.au/2015/03/24/moving-resources-between-azure-resource-groups/
Creating a new User Group for our company, say "GroupXYZ".
Adding "GroupXYZ" to the Contributor role.
Adding the particular developers' company accounts to "GroupXYZ".
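A sketch of the group-to-role assignment in PowerShell, assuming the AzureRM (Resource Manager) module, an AD group already created, and an illustrative resource group name "MyWebSiteRG":
PS> Login-AzureRmAccount
PS> $group = Get-AzureRmADGroup -SearchString "GroupXYZ"
# Grant Contributor on just the project's resource group, not the whole subscription:
PS> New-AzureRmRoleAssignment -ObjectId $group.Id `
      -RoleDefinitionName "Contributor" `
      -ResourceGroupName "MyWebSiteRG"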
Motivation for taking the role-based approach
From what I understand, giving everyone access as a Co-Administrator would mean that they have full access to every subscription in the portal.
Account-based authentication is preferable to certificate-based authentication due to the complexity added by managing the certificates.
What caused me to question my approach was the fact that I could not perform a VIP swap against the Cloud Service using PowerShell; I received an error message stating that a certificate could not be found.
Do such role-based accounts only have access to Azure by means of the Resource Manager cmdlets?
I had to switch PowerShell to Azure Service Manager (ASM) mode before having access to the Move-AzureDeployment cmdlet.
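For anyone hitting the same thing, the sequence was roughly as follows (a sketch; Switch-AzureMode only exists in the older 0.8.x/0.9.x Azure PowerShell releases, and ASM needs its own sign-in):
PS> Switch-AzureMode -Name AzureServiceManagement
PS> Add-AzureAccount
# Swaps the staging and production deployments of the cloud service:
PS> Move-AzureDeployment -ServiceName "MyCloudService"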
Something else I'm not sure of is whether or not Visual Studio will have access to those resources (in the Resource Group) when using Role Based Access Control.
When you apply RBAC to Azure as you have, or more generally give an account access via RBAC, that account can only access Azure via the Azure Resource Manager APIs, whether that's PowerShell, REST, or VS.
VS 2015 can access Azure resources via RBAC when using the 2.7 SDK. VS 2013 will have support for it soon.

How to Connect Virtual Machines in a Cloud Service using REST

I'm working with Virtual Machines on Windows Azure and according to the following link:
http://www.windowsazure.com/en-us/manage/windows/how-to-guides/connect-to-a-cloud-service/
it is possible to link multiple Virtual Machines to the same cloud service. The provided link clearly explains how to do it by means of the Windows Azure Management Portal. Nevertheless, in my case, I want to do the same using the REST API. Does anyone know how it can be done?
Thank you so much in advance,
Abel.
I believe you need to "add role" to add a new VM to an existing IaaS Cloud service: http://msdn.microsoft.com/en-us/library/windowsazure/jj157186.aspx
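A sketch of that Add Role call as a raw REST request; the subscription id, management certificate, service/deployment names, and the request body are placeholders to fill in from the MSDN page (the body is a PersistentVMRole XML document describing the new VM):
$ curl -X POST \
    --cert ./management-cert.pem \
    -H "x-ms-version: 2013-03-01" \
    -H "Content-Type: application/xml" \
    --data @persistent-vm-role.xml \
    "https://management.core.windows.net/<subscription-id>/services/hostedservices/<cloud-service-name>/deployments/<deployment-name>/roles"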