Mounting k8s persistent volume fails silently - kubernetes

I am trying to mount a PV into a pod with the following:
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "pv",
    "labels": {
      "type": "ssd1-zone1"
    }
  },
  "spec": {
    "capacity": {
      "storage": "150Gi"
    },
    "hostPath": {
      "path": "/mnt/data"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "persistentVolumeReclaimPolicy": "Retain",
    "storageClassName": "zone1"
  }
}
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "pvc",
    "namespace": "clever"
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "150Gi"
      }
    },
    "volumeName": "pv",
    "storageClassName": "zone1"
  }
}
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: pv
      persistentVolumeClaim:
        claimName: pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: pv
The pod is created properly and uses the PVC without problem. When I ssh into the pod to check the mount, however, the size is 50G, which is the size of the attached storage and not the volume size I specified.
root@task-pv-pod:/# df -aTh | grep "/html"
/dev/vda1 xfs 50G 13G 38G 26% /usr/share/nginx/html
The PVC appears to be correct, too:
root@5139993be066:/# kubectl describe pvc pvc
Name: pvc
Namespace: default
StorageClass: zone1
Status: Bound
Volume: pv
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"pvc","namespace":"default"},"spec":{"accessModes":["ReadWriteO...
pv.kubernetes.io/bind-completed=yes
Finalizers: []
Capacity: 150Gi
Access Modes: RWO
Events: <none>
I have deleted and recreated the volume and the claim many times and tried to use different images for my pod. Nothing is working.

It looks like your /mnt/data is on the root partition, so it reports the same free space as any other directory in the rootfs.
The thing about requested and defined capacities for a PV/PVC is that they are only values used for matching (or as hints to a dynamic provisioner). With hostPath and a manually created PV, you can declare 300TB and it will still bind, even if the folder backing the hostPath only has 5G, because the real size of the device is never verified (which is reasonable, since Kubernetes simply trusts the data provided in the PV).
So, as I said, check whether your /mnt/data is just part of the rootfs. If you still have the problem, provide the output of the mount command on the node where the pod is running.
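One quick way to check is to compare the device backing the directory with the device backing / (a sketch; run it on the node and substitute /mnt/data — /tmp is used here only so the snippet is runnable anywhere):

```shell
# Compare the filesystem device backing a directory with the one backing /.
# If they match, the hostPath just shares rootfs free space.
dir=/tmp                                        # replace with /mnt/data on the node
dev_dir=$(df -P "$dir" | awk 'NR==2 {print $1}')
dev_root=$(df -P /     | awk 'NR==2 {print $1}')
if [ "$dev_dir" = "$dev_root" ]; then
  echo "$dir is on the root filesystem"
else
  echo "$dir is a separate mount ($dev_dir)"
fi
```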

Related

the server could not find the metric nginx_vts_server_requests_per_second for pods

I installed kube-prometheus-0.9.0 and want to deploy a sample application on which to test Prometheus-metrics-based autoscaling, using the following resource manifest file (hpa-prome-demo.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-prom-demo
spec:
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
      - name: nginx-demo
        image: cnych/nginx-vts:v1.0
        resources:
          limits:
            cpu: 50m
          requests:
            cpu: 50m
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-prom-demo
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "80"
    prometheus.io/path: "/status/format/prometheus"
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: nginx-server
  type: NodePort
For testing purposes, I used a NodePort Service, and luckily I can get the HTTP response after applying the deployment. Then I installed Prometheus Adapter via its Helm chart, creating a new hpa-prome-adapter-values.yaml file to override the default values, as follows.
rules:
  default: false
  custom:
  - seriesQuery: 'nginx_vts_server_requests_total'
    resources:
      overrides:
        kubernetes_namespace:
          resource: namespace
        kubernetes_pod_name:
          resource: pod
    name:
      matches: "^(.*)_total"
      as: "${1}_per_second"
    metricsQuery: (sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))
prometheus:
  url: http://prometheus-k8s.monitoring.svc
  port: 9090
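The matches/as pair in the rule above behaves like a regular-expression rename, with ${1} referring to the first capture group. The same transformation can be sketched with sed (illustrative only, not the adapter's actual code):

```shell
# The adapter's name rule, sketched as a plain regex substitution:
#   matches: "^(.*)_total"  ->  as: "${1}_per_second"
echo "nginx_vts_server_requests_total" \
  | sed -E 's/^(.*)_total$/\1_per_second/'
# nginx_vts_server_requests_per_second
```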
This adds a custom rule and specifies the address of Prometheus. Install Prometheus Adapter with the following command:
$ helm install prometheus-adapter prometheus-community/prometheus-adapter -n monitoring -f hpa-prome-adapter-values.yaml
NAME: prometheus-adapter
LAST DEPLOYED: Fri Jan 28 09:16:06 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
Finally, the adapter was installed successfully, and I can get the HTTP response, as follows.
$ kubectl get po -nmonitoring |grep adapter
prometheus-adapter-665dc5f76c-k2lnl 1/1 Running 0 133m
$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
But it was supposed to look like this:
$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "pods/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
Why can't I get the metric pods/nginx_vts_server_requests_per_second? As a result, the query below also failed:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
Error from server (NotFound): the server could not find the metric nginx_vts_server_requests_per_second for pods
Could anybody please help? Many thanks.
ENV:
all Prometheus charts installed via Helm from prometheus-community: https://prometheus-community.github.io/helm-chart
k8s cluster provided by Docker for Mac
Solution:
I met the same problem. In the Prometheus UI, I found that the metric had a namespace label but no pod label, as below:
nginx_vts_server_requests_total{code="1xx", host="*", instance="10.1.0.19:80", job="kubernetes-service-endpoints", namespace="default", node="docker-desktop", service="hpa-prom-demo"}
I thought Prometheus might not be using pod as a label, so I checked the Prometheus config and found:
- action: replace
  source_labels:
  - __meta_kubernetes_pod_node_name
  target_label: node
Then I searched https://prometheus.io/docs/prometheus/latest/configuration/configuration/ and added a similar block below every occurrence of __meta_kubernetes_pod_node_name that I found (i.e. in 2 places):
- action: replace
  source_labels:
  - __meta_kubernetes_pod_name
  target_label: pod
After a while, the ConfigMap was reloaded, and both the UI and the API could see the pod label:
$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "pods/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
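For intuition, the effect of the added `replace` relabel block can be sketched in shell (an illustration only, not how Prometheus implements it): the value carried by the source label __meta_kubernetes_pod_name is extracted and exposed under the target label `pod`, which is what the adapter's pod override needs.

```shell
# Extract the pod name carried by the discovery metadata label and expose
# it as the `pod` label the adapter's resources override can map to a Pod.
labels='namespace="default" __meta_kubernetes_pod_name="hpa-prom-demo-bbb6c65bb-49jsh"'
pod=$(echo "$labels" | sed -E 's/.*__meta_kubernetes_pod_name="([^"]*)".*/\1/')
echo "pod=\"$pod\""
# pod="hpa-prom-demo-bbb6c65bb-49jsh"
```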
It is worth knowing that using the kube-prometheus repository, you can also install components such as Prometheus Adapter for Kubernetes Metrics APIs, so there is no need to install it separately with Helm.
I will use your hpa-prome-demo.yaml manifest file to demonstrate how to monitor nginx_vts_server_requests_total metrics.
First of all, we need to install Prometheus and Prometheus Adapter with appropriate configuration as described step by step below.
Clone the kube-prometheus repository and refer to the Kubernetes compatibility matrix in order to choose a compatible branch:
$ git clone https://github.com/prometheus-operator/kube-prometheus.git
$ cd kube-prometheus
$ git checkout release-0.9
Install the jb, jsonnet and gojsontoyaml tools:
$ go install -a github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest
$ go install github.com/google/go-jsonnet/cmd/jsonnet@latest
$ go install github.com/brancz/gojsontoyaml@latest
Uncomment the (import 'kube-prometheus/addons/custom-metrics.libsonnet') + line from the example.jsonnet file:
$ cat example.jsonnet
local kp =
  (import 'kube-prometheus/main.libsonnet') +
  // Uncomment the following imports to enable its patches
  // (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
  // (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
  // (import 'kube-prometheus/addons/node-ports.libsonnet') +
  // (import 'kube-prometheus/addons/static-etcd.libsonnet') +
  (import 'kube-prometheus/addons/custom-metrics.libsonnet') +  <--- This line
  // (import 'kube-prometheus/addons/external-metrics.libsonnet') +
  ...
Add the following rule to the ./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet file in the rules+ section:
{
  seriesQuery: "nginx_vts_server_requests_total",
  resources: {
    overrides: {
      namespace: { resource: 'namespace' },
      pod: { resource: 'pod' },
    },
  },
  name: { "matches": "^(.*)_total", "as": "${1}_per_second" },
  metricsQuery: "(sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))",
},
After this update, the ./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet file should look like this:
NOTE: This is not the entire file, just an important part of it.
$ cat custom-metrics.libsonnet
// Custom metrics API allows the HPA v2 to scale based on arbitrary metrics.
// For more details on usage visit https://github.com/DirectXMan12/k8s-prometheus-adapter#quick-links
{
  values+:: {
    prometheusAdapter+: {
      namespace: $.values.common.namespace,
      // Rules for custom-metrics
      config+:: {
        rules+: [
          {
            seriesQuery: "nginx_vts_server_requests_total",
            resources: {
              overrides: {
                namespace: { resource: 'namespace' },
                pod: { resource: 'pod' },
              },
            },
            name: { "matches": "^(.*)_total", "as": "${1}_per_second" },
            metricsQuery: "(sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))",
          },
...
Use the jsonnet-bundler update functionality to update the kube-prometheus dependency:
$ jb update
Compile the manifests:
$ ./build.sh example.jsonnet
Now simply use kubectl to install Prometheus and other components as per your configuration:
$ kubectl apply --server-side -f manifests/setup
$ kubectl apply -f manifests/
After configuring Prometheus, we can deploy a sample hpa-prom-demo Deployment:
NOTE: I've deleted the annotations because I'm going to use a ServiceMonitor to describe the set of targets to be monitored by Prometheus.
$ cat hpa-prome-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-prom-demo
spec:
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
      - name: nginx-demo
        image: cnych/nginx-vts:v1.0
        resources:
          limits:
            cpu: 50m
          requests:
            cpu: 50m
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-prom-demo
  labels:
    app: nginx-server
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: nginx-server
  type: LoadBalancer
Next, create a ServiceMonitor that describes how to monitor our NGINX:
$ cat servicemonitor.yaml
kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
  name: hpa-prom-demo
  labels:
    app: nginx-server
spec:
  selector:
    matchLabels:
      app: nginx-server
  endpoints:
  - interval: 15s
    path: "/status/format/prometheus"
    port: http
After waiting some time, let's check the hpa-prom-demo logs to make sure that it is being scraped correctly:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hpa-prom-demo-bbb6c65bb-49jsh 1/1 Running 0 35m
$ kubectl logs -f hpa-prom-demo-bbb6c65bb-49jsh
...
10.4.0.9 - - [04/Feb/2022:09:29:17 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3771 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:29:32 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3771 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:29:47 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:30:02 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:30:17 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.2.12 - - [04/Feb/2022:09:30:23 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
...
Finally, we can check if our metrics work as expected:
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/" | jq . | grep -A 7 "nginx_vts_server_requests_per_second"
"name": "pods/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
--
"name": "namespaces/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/nginx_vts_server_requests_per_second"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "hpa-prom-demo-bbb6c65bb-49jsh",
        "apiVersion": "/v1"
      },
      "metricName": "nginx_vts_server_requests_per_second",
      "timestamp": "2022-02-04T09:32:59Z",
      "value": "533m",
      "selector": null
    }
  ]
}

What configuration can be done on Prometheus Adapter in order to get the sum of container_cpu_usage_seconds_total across all replicas of a container?

I have a Kubernetes cluster and Prometheus/Prometheus adapter installed.
These are the Prometheus Adapter configuration rules:
rules:
  custom:
  - seriesQuery: '{__name__=~"container_cpu_usage_seconds_total"}'
    resources:
      overrides:
        template: "<<.Resource>>"
        # namespace:
        #   resource: namespace
        # pod:
        #   resource: pod
    name:
      matches: "container_cpu_usage_seconds_total"
      as: "my_custom_metric"
    metricsQuery: sum(<<.Series>>{container="php-apache"}) by (<<.GroupBy>>)
And this is my hpa configuration:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 6
  metrics:
  - type: Pods
    pods:
      metric:
        name: my_custom_metric
      target:
        type: Value
        averageValue: 250  # limit
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache
The problem here is that I want to scale based on the sum across the replicas of the container=php-apache usage, not on the average value across them.
This is the value that is returned from the Prometheus Adapter:
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/malakas"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "php-apache-d4cf67d68-8ddbx",
        "apiVersion": "/v1"
      },
      "metricName": "my_custom_metric",
      "timestamp": "2021-04-16T10:52:02Z",
      "value": "331827m",
      "selector": null
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "php-apache-d4cf67d68-zxkrd",
        "apiVersion": "/v1"
      },
      "metricName": "my_custom_metric",
      "timestamp": "2021-04-16T10:52:02Z",
      "value": "44478m",
      "selector": null
    }
  ]
}
In this example, there are 2 replicas.
I want to get one result (the sum of these two) instead of two results as above, so that I can pass it to the HPA and scale accordingly.
How can I achieve that?
You should use the metric from the Service, not from the Pods:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/*/my_custom_metric"
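For completeness, here is a hedged sketch of the corresponding HPA side: a type: Object metric attached to the Service, so the autoscaler compares a single Service-level value against the target instead of averaging per-pod values (names taken from the question; verify the field names against your autoscaling/v2beta2 version):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 6
  metrics:
  - type: Object
    object:
      metric:
        name: my_custom_metric
      describedObject:
        apiVersion: v1
        kind: Service
        name: php-apache
      target:
        type: Value
        value: "250"
```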

Intermittent failure of container mounts in Kubernetes

We are seeing an intermittent failure of volume mount with this error message:
Error: cannot find volume "work" to mount into container "notebook".
The issue happens on ~5% of pod launches (where they all have the same config). The volume is backed by a PVC that is created immediately before the pod.
We are running on GKE with version v1.11.7-gke.12.
Pod manifest is here:
{
  apiVersion: 'v1',
  kind: 'Pod',
  metadata: {
    name: 'some pod name',
    annotations: {},
    labels: {},
  },
  spec: {
    restartPolicy: 'OnFailure',
    securityContext: {
      fsGroup: 100,
    },
    automountServiceAccountToken: false,
    volumes: [
      {
        name: 'work',
        persistentVolumeClaim: {
          claimName: pvcName,
        },
      },
    ],
    containers: [
      {
        name: 'notebook',
        image,
        workingDir: undefined, // this is defined in Dockerfile
        ports: [
          {
            name: 'notebook-port',
            containerPort: port,
          },
        ],
        args: [...command.split(' '), ...args],
        imagePullPolicy: 'IfNotPresent',
        volumeMounts: [
          {
            name: 'work',
            mountPath: '/home/jovyan/work',
          },
        ],
        resources: {
          requests: {
            memory: '256M',
          },
          limits: {
            memory: '1G',
          },
        },
      },
      {
        name: 'watcher',
        image: 'gcr.io/deepnote-200602/wacher:0.0.3',
        imagePullPolicy: 'Always',
        volumeMounts: [
          {
            name: 'work',
            mountPath: '/home/jovyan/work',
          },
        ],
      },
    ],
  },
}
Any help or ideas would be greatly appreciated! I am also very happy to try any suggestions for other logs or steps that might help isolate the issue.
Most likely the volume is not bound. Can you check and confirm the status of the PVC below?
claimName: pvcName
kubectl get pvc | grep pvcName

Kubernetes using secrets in pod

I have a spring boot app image which needs the following property.
server.ssl.keyStore=/certs/keystore.jks
I am loading the keystore file into a secret using the below command.
kubectl create secret generic ssl-keystore-cert --from-file=./server-ssl.jks
I use the below secret reference in my deployment.yaml:
{
  "name": "SERVER_SSL_KEYSTORE",
  "valueFrom": {
    "secretKeyRef": {
      "name": "ssl-keystore-cert",
      "key": "server-ssl.jks"
    }
  }
}
With the above reference, I am getting the below error.
Error: failed to start container "app-service": Error response from
daemon: oci runtime error: container_linux.go:265: starting container
process caused "process_linux.go:368: container init caused \"setenv:
invalid argument\"" Back-off restarting failed container
If I go with the volume mount option:
"spec": {
  "volumes": [
    {
      "name": "keystore-cert",
      "secret": {
        "secretName": "ssl-keystore-cert",
        "items": [
          {
            "key": "server-ssl.jks",
            "path": "keycerts"
          }
        ]
      }
    }
  ],
  "containers": [
    {
      "env": [
        {
          "name": "JAVA_OPTS",
          "value": "-Dserver.ssl.keyStore=/certs/keystore/keycerts"
        }
      ],
      "name": "app-service",
      "ports": [
        {
          "containerPort": 8080,
          "protocol": "TCP"
        }
      ],
      "volumeMounts": [
        {
          "name": "keystore-cert",
          "mountPath": "/certs/keystore"
        }
      ],
      "imagePullPolicy": "IfNotPresent"
    }
  ]
}
I am getting the below error with the above approach.
Caused by: java.lang.IllegalArgumentException: Resource location must
not be null at
org.springframework.util.Assert.notNull(Assert.java:134)
~[spring-core-4.3.7.RELEASE.jar!/:4.3.7.RELEASE] at
org.springframework.util.ResourceUtils.getURL(ResourceUtils.java:131)
~[spring-core-4.3.7.RELEASE.jar!/:4.3.7.RELEASE] at
org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory.configureSslKeyStore(JettyEmbeddedServletContainerFactory.java:301)
~[spring-boot-1.4.5.RELEASE.jar!/:1.4.5.RELEASE]
I tried with the below option also, instead of JAVA_OPTS,
{
  "name": "SERVER_SSL_KEYSTORE",
  "value": "/certs/keystore/keycerts"
}
The error is still the same. I am not sure what the right approach is.
I tried to reproduce the situation with your configuration. I created a secret using the command:
kubectl create secret generic ssl-keystore-cert --from-file=./server-ssl.jks
I used this YAML as a test environment:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    env:
    - name: JAVA_OPTS
      value: "-Dserver.ssl.keyStore=/certs/keystore/server-ssl.jks"
    ports:
    - containerPort: 8080
      protocol: TCP
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/certs/keystore"
  volumes:
  - name: secret-volume
    secret:
      secretName: ssl-keystore-cert
As you can see, I used the "server-ssl.jks" file name in the variable. If you create a secret from a file, Kubernetes stores that file in the secret under its file name as the key, and when you mount the secret anywhere, the key becomes a file. You tried to use /certs/keystore/keycerts, but it doesn't exist, which is what you see in the logs:
Resource location must not be null at org.springframework.util.Assert.notNull
because your mounted secret is here: /certs/keystore/keycerts/server-ssl.jks
It should work once you fix the paths.
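The key-becomes-filename behaviour can be made concrete with a local sketch (illustrative only; mktemp stands in for the real mountPath on the container filesystem):

```shell
# A secret created with --from-file=server-ssl.jks and mounted at a
# directory yields <mountPath>/server-ssl.jks: the secret key is the filename.
mountpath=$(mktemp -d)              # stand-in for /certs/keystore
touch "$mountpath/server-ssl.jks"   # what the kubelet writes for the key
ls "$mountpath"
# server-ssl.jks
```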

How do I control a kubernetes PersistentVolumeClaim to bind to a specific PersistentVolume?

I have multiple volumes and one claim. How can I tell the claim which volume to bind to?
How does a PersistentVolumeClaim know which volume to bind to? Can I control this using some other parameters or metadata?
I have the following PersistentVolumeClaim:
{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "default-drive-claim"
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "10Gi"
      }
    }
  }
}
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "default-drive-disk",
    "labels": {
      "name": "default-drive-disk"
    }
  },
  "spec": {
    "capacity": {
      "storage": "10Gi"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "gcePersistentDisk": {
      "pdName": "a1-drive",
      "fsType": "ext4"
    }
  }
}
If I create the claim and the volume using:
kubectl create -f pvc.json -f pv.json
I get the following listing of the volumes and claims:
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
default-drive-disk name=default-drive-disk 10Gi RWO Bound default/default-drive-claim 2s
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
default-drive-claim <none> Bound default-drive-disk 10Gi RWO 2s
How does the claim know to which volume to bind?
The current implementation does not allow your PersistentVolumeClaim to target specific PersistentVolumes. Claims bind to volumes based on their capabilities (access modes) and capacity.
In the works is the next iteration of PersistentVolumes, which includes a PersistentVolumeSelector on the claim. This would work exactly like a NodeSelector on a Pod: the volume would have to match the claim's label selector in order to bind. This is the targeting you are looking for.
Please see https://github.com/kubernetes/kubernetes/pull/17056 for the proposal containing PersistentVolumeSelector.
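For reference, a claim targeting the labelled volume from the question would look roughly like this (a sketch of the selector-based targeting described above; verify the selector field against your Kubernetes version):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-drive-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      name: default-drive-disk   # matches the PV's labels
```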