I am unable to create a user for the admin console, even though I have removed the KEYCLOAK_USER and KEYCLOAK_PASSWORD values that I had previously set in the YAML file I use to deploy the Keycloak Helm chart.
I have also tried the previous values for KEYCLOAK_USER and KEYCLOAK_PASSWORD (admin/admin).
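For reference, the removed entries sat in the extraEnv block of the values file shown below and looked roughly like this (a sketch reconstructed from the admin/admin values):

- name: KEYCLOAK_USER
  value: admin
- name: KEYCLOAK_PASSWORD
  value: admin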
When I list volumes with kubectl get pvc or kubectl get pv, I only see the PV and PVC for Postgres.
$ kubectl get pv
NAME                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS   REASON   AGE
postgres-pv-volume-keycloakx   5Gi        RWX            Retain           Bound    default/postgres-pv-claim-keycloakx   manual                  34h
postgres-pv-volume-old         5Gi        RWX            Retain           Bound    default/postgres-pv-claim-old         manual                  32h
$ kubectl get pvc
NAME                          STATUS   VOLUME                         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-pv-claim-keycloakx   Bound    postgres-pv-volume-keycloakx   5Gi        RWX            manual         34h
postgres-pv-claim-old         Bound    postgres-pv-volume-old         5Gi        RWX            manual         32h
This is my yaml file for the deployment of the helm chart:
values.yaml:
postgresql:
  enabled: false

extraEnv: |
  - name: DB_VENDOR
    value: postgres
  - name: DB_ADDR
    value: "192.168.49.2"
  - name: DB_PORT
    value: "31298"
  - name: DB_DATABASE
    value: keycloak
  - name: DB_USER
    value: keycloak
  - name: DB_PASSWORD
    value: keycloak
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: JDBC_PARAMS
    value: "connectTimeout=30000"
Having removed the values for KEYCLOAK_USER and KEYCLOAK_PASSWORD from the YAML file, I should be able to create a user for the admin console, but the option to create that user does not appear on the screen.
If I try admin/admin anyway, I get the following error in the terminal:
00:09:09,222 WARN [org.keycloak.events] (default task-7) type=LOGIN_ERROR, realmId=master, clientId=security-admin-console, userId=dc446ce2-9456-40e6-8402-9d31f5439178, ipAddress=127.0.0.1, error=invalid_user_credentials, auth_method=openid-connect, auth_type=code, redirect_uri=http://localhost:8080/auth/admin/master/console/, code_id=5b1d57d9-c11b-423b-ba1b-a56bdbbe95b1, username=admin, authSessionParentId=5b1d57d9-c11b-423b-ba1b-a56bdbbe95b1, authSessionTabId=FVdtaK5wOZA
I am running this Keycloak pod and a separate Postgres pod on minikube under WSL2.
Any help or guidance on this would be appreciated.
Related
I'm trying to get persistent logs in Airflow; for now, after a DAG completes, I get the error below:
hello-world-run-a-demo-job-4b7650b6dd784429a54c3bd5e5c983e6
*** Trying to get logs (last 100 lines) from worker pod hello-world-run-a-demo-job-4b7650b6dd784429a54c3bd5e5c983e6 ***
*** Unable to fetch logs from worker pod hello-world-run-a-demo-job-4b7650b6dd784429a54c3bd5e5c983e6 ***
(404)
Reason: Not Found
I have created a StorageClass:
$ kubectl get sc
NAME           PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
airflow-logs   kubernetes.io/azure-file   Delete          Immediate           false                  10m
and in the values.yaml of Airflow I have used that StorageClass; the PersistentVolumeClaim was created when I ran helm upgrade:
logs:
  persistence:
    # Enable persistent volume for storing logs
    enabled: true
    # Volume size for logs
    size: 10Gi
    # If using a custom storageClass, pass name here
    storageClassName: airflow-logs
    ## the name of an existing PVC to use
    existingClaim:
PVC was created automatically:
$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
airflow-logs   Bound    pvc-f13ea0d8-243d-41ea-8ca2-35b48b848e2f   10Gi       RWX            airflow-logs   18m
But I still get the same 404 error when trying to view the logs.
Persistent volume.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: airflow-logs
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=50000
  - gid=0
  - mfsymlinks
  - cache=strict
  - actimeo=30
parameters:
  skuName: Standard_LRS
I've also tried creating both the PV and the PVC myself (instead of just the PV) and passing them explicitly to helm upgrade, but it did not change anything:
helm upgrade --install airflow . \
--set images.airflow.repository=my-repo \
--set images.airflow.tag=latest \
--set logs.persistence.enabled=true \
--set logs.persistence.existingClaim=airflow-logs \
--set images.airflow.pullPolicy=Always \
--set registry.secretName=mysecretrg
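One check worth doing is whether a running Airflow pod can actually see and write to the mounted logs volume; a sketch, where the pod name is a placeholder and /opt/airflow/logs is the chart's usual log path:

kubectl exec -it <airflow-scheduler-pod> -- ls -ld /opt/airflow/logs
kubectl exec -it <airflow-scheduler-pod> -- touch /opt/airflow/logs/write-test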
I am working on deploying the Hyperledger Fabric test network on a Kubernetes minikube cluster. I intend to use a PersistentVolume to share crypto-config and channel artifacts among the various peers and orderers. Following are my PersistentVolume.yaml and PersistentVolumeClaim.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Following is the pod where the above claim is mounted on /data:
kind: Pod
apiVersion: v1
metadata:
  name: test-shell
  labels:
    name: test-shell
spec:
  containers:
    - name: shell
      image: ubuntu
      command: ["/bin/bash", "-c", "while true ; do sleep 10 ; done"]
      volumeMounts:
        - mountPath: "/data"
          name: pv
  volumes:
    - name: pv
      persistentVolumeClaim:
        claimName: persistent-volume-claim
NFS is set up on my EC2 instance. I have verified the NFS server is working fine, and I was able to mount it inside minikube. I do not understand what I am doing wrong, but any file present inside 3.128.203.245:/nfsroot is not present in test-shell:/data.
What am I missing? I even tried a hostPath mount, but to no avail. Please help me out.
I think you should check the following things to verify whether NFS is mounted successfully.
Run this command on the node where you want to mount:
$ showmount -e <nfs-server-ip>
Like in my case:
$ showmount -e 172.16.10.161
Export list for 172.16.10.161:
/opt/share *
Use the df -hT command to see whether NFS is mounted. In my case it gives the output:
172.16.10.161:/opt/share nfs4 91G 32G 55G 37% /opt/share
If it is not mounted, use the following command:
$ sudo mount -t nfs 172.16.10.161:/opt/share /opt/share
If the above commands show an error, check whether the firewall is allowing NFS:
$ sudo ufw status
If not, allow it with:
$ sudo ufw allow from <nfs-server-ip> to any port nfs
I made the same setup and don't face any issues; my Fabric cluster on Kubernetes is running successfully. The Hyperledger Fabric Kubernetes YAML files can be found at my GitHub repo. There I have deployed a consortium of banks on Hyperledger Fabric as a dynamic multi-host blockchain network, which means you can add orgs and peers, join peers, create channels, and install and instantiate chaincode on the fly in an existing running network.
By default in minikube you should have a default StorageClass:
Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.
For example, NFS doesn't provide an internal provisioner, but an external provisioner can be used. There are also cases when 3rd party storage vendors provide their own external provisioner.
Change the default StorageClass
In your example this default StorageClass can lead to problems.
In order to list enabled addons in minikube please use:
minikube addons list
To list all StorageClasses in your cluster use:
kubectl get sc
NAME                 PROVISIONER
standard (default)   k8s.io/minikube-hostpath
Please note that at most one StorageClass can be marked as default. If two or more of them are marked as default, a PersistentVolumeClaim without storageClassName explicitly specified cannot be created.
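For example, a claim like this one, with no storageClassName set, falls through to the default class and gets a dynamically provisioned volume; a minimal sketch matching the pvc-nfs claim in the output below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
spec:
  # no storageClassName: the default "standard" class serves this claim
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi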
In your example the most probable scenario is that you already have a default StorageClass. Applying those resources caused the creation of a new PV (without a StorageClass) and a new PVC (with a reference to the existing default StorageClass). In this situation there is no binding between your custom PV and your PVC. As an example, please take a look:
kubectl get pv,pvc,sc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE
persistentvolume/nfs                                        3Gi        RWX            Retain           Available                                             50m
persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871   1Gi        RWX            Delete           Bound       default/pvc-nfs   standard                50m

NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvc-nfs   Bound    pvc-8aeb802f-cd95-4933-9224-eb467aaa9871   1Gi        RWX            standard       50m

NAME                                             PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  103m
This example will not work because:
a new persistentvolume/nfs has been created (with no PVC referencing it)
a new persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 has been created by dynamic PV provisioning using the default StorageClass. In the Claim column we can see that this PV was created for the default/pvc-nfs claim (persistentvolumeclaim/pvc-nfs).
Solution 1.
According to the information from the comments:
Also I am able to connect to it within my minikube and also my actual ubuntu system.
So you are able to mount this NFS share from inside the minikube host.
If you mounted the NFS share on your minikube node, please try this example with a hostPath volume used directly by your pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-shell
  namespace: default
spec:
  volumes:
    - name: pv
      hostPath:
        path: /path/shares # path to the NFS mount point on the minikube node
  containers:
    - name: shell
      image: ubuntu
      command: ["/bin/bash", "-c", "sleep 1000"]
      volumeMounts:
        - name: pv
          mountPath: /data
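For this to work, the share must already be mounted on the node itself; a minimal sketch, assuming /path/shares as the mount point and the server/export from the question:

minikube ssh
sudo mkdir -p /path/shares
sudo mount -t nfs 3.128.203.245:/nfsroot /path/shares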
Solution 2.
If you are using PV/PVC approach:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: "" # empty string must be explicitly set, otherwise the default StorageClass will be used (or set a custom storageClassName)
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false
  claimRef:
    name: persistent-volume-claim
    namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistent-volume-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: "" # empty string must be explicitly set, otherwise the default StorageClass will be used (or set a custom storageClassName)
  volumeName: persistent-volume
Note:
If you are not referencing any provisioner associated with your StorageClass:
Helper programs relating to the volume type may be required for consumption of a PersistentVolume within a cluster. In this example, the PersistentVolume is of type NFS and the helper program /sbin/mount.nfs is required to support the mounting of NFS filesystems.
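On Debian/Ubuntu nodes that helper comes from the nfs-common package; a sketch (the package name varies by distribution):

sudo apt-get update
sudo apt-get install -y nfs-common   # provides /sbin/mount.nfs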
Please keep in mind that when you create a PVC, the Kubernetes persistent-volume controller tries to bind the PVC to a matching PV. During this process different factors are taken into account, such as storageClassName (default/custom), accessModes, claimRef, and volumeName.
In this case you can use:
PersistentVolume.spec.claimRef.name: persistent-volume-claim
PersistentVolumeClaim.spec.volumeName: persistent-volume
Note:
The control plane can bind PersistentVolumeClaims to matching PersistentVolumes in the cluster. However, if you want a PVC to bind to a specific PV, you need to pre-bind them.
By specifying a PersistentVolume in a PersistentVolumeClaim, you declare a binding between that specific PV and PVC. If the PersistentVolume exists and has not reserved PersistentVolumeClaims through its claimRef field, then the PersistentVolume and PersistentVolumeClaim will be bound.
The binding happens regardless of some volume matching criteria, including node affinity. The control plane still checks that storage class, access modes, and requested storage size are valid.
Once the PV/PVC are created, or in case of any problem with PV/PVC binding, please use the following commands to inspect the current state:
kubectl get pv,pvc,sc
kubectl describe pv
kubectl describe pvc
kubectl describe pod
kubectl get events
I'm very new to Kubernetes and am trying to get node-red running on a small cluster of Raspberry Pis.
I happily managed that, but noticed that once the cluster is powered down, the next time I bring it up the flows in node-red have vanished.
So I've created an NFS share on a FreeNAS box on my local network, and I can mount it from another RPi, so I know the permissions work.
However I cannot get my mount to work in a kubernetes deployment.
Any help as to where I have gone wrong please?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-red
  labels:
    app: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
        - name: node-red
          image: nodered/node-red:latest
          ports:
            - containerPort: 1880
              name: node-red-ui
          securityContext:
            privileged: true
          volumeMounts:
            - name: node-red-data
              mountPath: /data
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: TZ
              value: Europe/London
      volumes:
        - name: node-red-data
      nfs:
        server: 192.168.1.96
        path: /mnt/Pool1/ClusterStore/nodered
The error I am getting is
error: error validating "node-red-deploy.yml": error validating data:
ValidationError(Deployment.spec.template.spec): unknown field "nfs" in io.k8s.api.core.v1.PodSpec; if
you choose to ignore these errors, turn validation off with --validate=false
New Information
I now have the following
apiVersion: v1
kind: PersistentVolume
metadata:
  name: clusterstore-nodered
  labels:
    type: nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/Pool1/ClusterStore/nodered
    server: 192.168.1.96
  persistentVolumeReclaimPolicy: Recycle
claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clusterstore-nodered-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Now when I start the deployment it waits at Pending forever, and I see the following in the events for the PVC:
Events:
  Type     Reason                Age                    From                                                                                                 Message
  ----     ------                ----                   ----                                                                                                 -------
  Normal   WaitForFirstConsumer  5m47s (x7 over 7m3s)   persistentvolume-controller                                                                          waiting for first consumer to be created before binding
  Normal   Provisioning          119s (x5 over 5m44s)   rancher.io/local-path_local-path-provisioner-58fb86bdfd-rtcls_506528ac-afd0-11ea-930d-52d0b85bb2c2   External provisioner is provisioning volume for claim "default/clusterstore-nodered-claim"
  Warning  ProvisioningFailed    119s (x5 over 5m44s)   rancher.io/local-path_local-path-provisioner-58fb86bdfd-rtcls_506528ac-afd0-11ea-930d-52d0b85bb2c2   failed to provision volume with StorageClass "local-path": Only support ReadWriteOnce access mode
  Normal   ExternalProvisioning  92s (x19 over 5m44s)   persistentvolume-controller                                                                          waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
I assume this is because I don't have an NFS provisioner; in fact, if I do kubectl get storageclass I only see local-path.
New question: how do I add a StorageClass for NFS? A little googling around has left me without a clue.
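One alternative that avoids a provisioner entirely is the pre-binding pattern shown earlier in this thread: pin the claim to the PV and opt out of the default class. A sketch:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clusterstore-nodered-claim
spec:
  storageClassName: ""             # keeps the default local-path provisioner out of the way
  volumeName: clusterstore-nodered # bind directly to the NFS PV above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi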
OK, solved the issue. Kubernetes tutorials are really esoteric and miss lots of assumed steps.
My problem was down to k3s on the Pi only shipping with a local-path storage provisioner.
I finally found a tutorial that installed an NFS client storage provisioner, and now my cluster works!
This was the tutorial I found the information in.
In the stated tutorial there are basically these steps to fulfill:
1.
showmount -e 192.168.1.XY
to check whether the share is reachable from outside the NAS
2.
helm install nfs-provisioner stable/nfs-client-provisioner --set nfs.server=192.168.1.XY --set nfs.path=/samplevolume/k3s --set image.repository=quay.io/external_storage/nfs-client-provisioner-arm
where you replace the IP with your NFS server and the NFS path with your specific path on your Synology (both should be visible from your showmount -e IP command).
Update 23.02.2021
It seems that you have to use another chart and image now:
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.1.XY --set nfs.path=/samplevolume/k3s --set image.repository=gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner
3.
kubectl get storageclass
to check whether the StorageClass now exists
4.
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' && kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
to configure the new StorageClass as the default. Replace nfs-client and local-path with what kubectl get storageclass tells you.
5.
kubectl get storageclass
Final check that it's marked as "default".
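Once nfs-client is the default class, a plain claim is provisioned on the NFS share automatically; a minimal sketch (the claim name is a placeholder):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: node-red-data
spec:
  accessModes:
    - ReadWriteMany # the nfs-client provisioner supports RWX
  resources:
    requests:
      storage: 1Gi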
This is a validation error pointing at the very last part of your Deployment yaml, therefore making it an invalid object. It looks like you've made a mistake with indentations. It should look more like this:
volumes:
  - name: node-red-data
    nfs:
      server: 192.168.1.96
      path: /mnt/Pool1/ClusterStore/nodered
Also, as you are new to Kubernetes, I strongly recommend getting familiar with the concepts of PersistentVolumes and its claims. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV.
Please let me know if that helped.
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/06b9ae42-8e99-11e9-b888-a44c24184b19/volumes/kubernetes.io~nfs/nfs-data --scope -- mount -t nfs 10.100.155.82:/exports/www /var/lib/kubelet/pods/06b9ae42-8e99-11e9-b888-a44c24184b19/volumes/kubernetes.io~nfs/nfs-data
Output: Running scope as unit: run-r26f9da6c287846589bec8d059c33441d.scope
mount.nfs: Connection timed out

FailedMount  Warning  2019-06-14T11:48:46Z
typo3-app-67b58d7657-cvqdg  Pod  Unable to mount volumes for pod "typo3-app-67b58d7657-cvqdg_default(1fb4c719-8e9a-11e9-b888-a44c24184b19)": timeout expired waiting for volumes to attach or mount for pod "default"/"typo3-app-67b58d7657-cvqdg". list of unmounted volumes=[nfs-data nfs-data-src]. list of unattached volumes=[nfs-data nfs-data-src default-token-lmtl4]
FailedMount  Warning  2019-06-14T11:49:04Z
Weirdly, I have no idea where it's getting 10.100.155.82 from; that was the previous ClusterIP of the NFS Service...
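To see which ClusterIP and endpoints the NFS Service resolves to right now, a quick check (a sketch; adjust the grep to your Service name):

kubectl get svc,endpoints | grep nfs

The Deployment in question: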
apiVersion: apps/v1
kind: Deployment
metadata:
  name: typo3-app
  labels:
    app: typo3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: typo3
  template:
    metadata:
      labels:
        app: typo3
    spec:
      containers:
        - name: app
          image: us.gcr.io/objit-chris/chrisjitit-typo3:v11
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html-chrisjitit
              name: nfs-data
            - mountPath: /var/www/typo3_src-6.2.6
              name: nfs-data-src
      volumes:
        - name: nfs-data
          nfs:
            # https://github.com/kubernetes/minikube/issues/3417
            # server is not resolved using kube dns (so can't resolve to a service name - hence we need the IP)
            #server: 10.11.250.37
            server: 10.97.78.206
            path: /exports/www
        - name: nfs-data-src
          nfs:
            # https://github.com/kubernetes/minikube/issues/3417
            # server is not resolved using kube dns (so can't resolve to a service name - hence we need the IP)
            #server: 10.11.250.37
            server: 10.97.78.206
            path: /exports/www/typo3_src
What may be the cause of this timeout / the wrong IP being used?
I tried deleting the deployment and changing the name; that still didn't seem to work, but a few minutes later it worked. Really strange behavior.
Ran into this issue again...:
kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM              STORAGECLASS   REASON   AGE
nfs-data                                   10Gi       RWO            Retain           Available                                              67m
pvc-f1353542-a8b1-11e9-bdf7-38ffa66115bc   10Gi       RWO            Delete           Bound       default/nfs-data   standard                67m

kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-data   Bound    pvc-f1353542-a8b1-11e9-bdf7-38ffa66115bc   10Gi       RWO            standard       67m
It keeps picking up an old NFS IP. It's a bug...:
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/9470ac17-a8b9-11e9-bdf7-38ffa66115bc/volumes/kubernetes.io~nfs/nfs-data-src --scope -- mount -t nfs 10.11.250.37:/exports/www/typo3_src /var/lib/kubelet/pods/9470ac17-a8b9-11e9-bdf7-38ffa66115bc/volumes/kubernetes.io~nfs/nfs-data-src
Output: Running scope as unit: run-ref2095cb52c94d0c87de5458c3b16733.scope
mount.nfs: Connection timed out
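A quick way to check whether the NFS port is reachable at the IP the volume actually references; a sketch, assuming the busybox image's nc applet (2049 is the standard NFS port):

kubectl run nfs-port-check --rm -it --image=busybox --restart=Never -- \
  sh -c 'echo | nc -w 5 10.97.78.206 2049 && echo reachable'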
I am trying to use the following, from https://cloud.ibm.com/docs/containers?topic=containers-file_storage#add_file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ibmc-file
  labels:
    billingType: 'monthly'
    region: us-south
    zone: dal10
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 12Gi
  storageClassName: ibmc-file-silver
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:11
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: ibmc-file
But the PVC is never "Bound" and gets stuck as "Pending".
➜ postgres-kubernetes kubectl describe pvc ibmc-file
Name: ibmc-file
Namespace: default
StorageClass: ibmc-file-silver
Status: Pending
Volume:
Labels: billingType=monthly
region=us-south
zone=dal10
Annotations: ibm.io/provisioning-status=failed: Storage creation failed with error: {Code:E0013, Description:User doesn't have permissions to create or manage Storage [Backend Error:Validation failed due to missin...
kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"labels":{"billingType":"monthly","region":"us-south","zone":"dal10"},"n...
volume.beta.kubernetes.io/storage-provisioner=ibm.io/ibmc-file
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 10m (x3 over 10m) ibm.io/ibmc-file_ibm-file-plugin-5d7684d8c5-xlvks_db50c480-500f-11e9-ba08-cae91657b92d External provisioner is provisioning volume for claim "default/ibmc-file"
Warning ProvisioningFailed 10m (x3 over 10m) ibm.io/ibmc-file_ibm-file-plugin-5d7684d8c5-xlvks_db50c480-500f-11e9-ba08-cae91657b92d failed to provision volume with StorageClass "ibmc-file-silver": Storage creation failed with error: {Code:E0013, Description:User doesn't have permissions to create or manage Storage [Backend Error:Validation failed due to missing permissions[NAS_MANAGE] for User[id:xxx, name:xxxm_2018-11-20-07.35.49, email:xxx, account:xxx]], Type:MissingStoragePermissions, RC:401, Recommended Action(s):Run `ibmcloud ks api-key-info` to see the owner of the API key that is used to order storage. Then, contact the account administrator to add the missing storage permissions. If infrastructure credentials were manually set via `ibmcloud ks credentials-set`, check the permissions of that user. Delete the PVC and re-create it. If the problem persists, open an IBM Cloud support case.}
Normal ExternalProvisioning 7m (x22 over 10m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "ibm.io/ibmc-file" or manually created by system administrator
Normal ExternalProvisioning 11s (x26 over 6m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "ibm.io/ibmc-file" or manually created by system administrator
#atkayla Could you try running kubectl get secret storage-secret-store -n kube-system -o yaml | grep slclient.toml: | awk '{print $2}' | base64 --decode to see what API key is used in the storage secret store? If this also shows your name and email address, then the file storage plug-in uses the permissions that are assigned to you.
You might have the permissions to create the cluster, but you might lack some of the storage permissions needed to create the storage. Are you the owner of the account, and can you check the permissions? You should have Add/Upgrade Storage (StorageLayer) and Storage Manage.
If you do not have these permissions, add them and then run ibmcloud ks api-key-set to update the API key. The storage secret store is automatically refreshed after 5-15 minutes. Then you can try again.
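Putting the suggested recovery together; a sketch using only the commands named above and in the error message (pvc.yaml is a placeholder for your claim file):

ibmcloud ks api-key-info        # see whose API key is used to order storage
# have the account administrator add the missing storage permissions, then:
ibmcloud ks api-key-set         # update the stored API key
kubectl delete pvc ibmc-file    # delete and re-create the claim
kubectl apply -f pvc.yaml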