My DigitalOcean kubernetes cluster is unable to pull images from the DigitalOcean registry. I get the following error message:
Failed to pull image "registry.digitalocean.com/XXXX/php:1.1.39": rpc error: code = Unknown desc = failed to pull and unpack image
"registry.digitalocean.com/XXXXXXX/php:1.1.39": failed to resolve reference
"registry.digitalocean.com/XXXXXXX/php:1.1.39": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized
I have added the Kubernetes cluster using the DigitalOcean Container Registry integration, which shows as successfully connected both on the registry and in the settings for the Kubernetes cluster.
I can confirm the above address `registry.digitalocean.com/XXXX/php:1.1.39` matches the one in the registry. I wonder if I'm misunderstanding how the token / login integration works with the registry, but I'm under the impression that this was a "one click" thing and that the cluster would automatically get the connection to the registry after that.
I have tried logging Helm into the registry before pushing, but this did not work (and I wouldn't really expect it to; the cluster should be pulling the image).
It's not completely clear to me how the image pull secrets are supposed to be used.
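For context, my understanding of how a pull secret is normally referenced from a pod spec is roughly this (a minimal sketch; the secret name is a placeholder, not my actual config):

spec:
  imagePullSecrets:
    - name: my-registry-secret
  containers:
    - name: app
      image: registry.digitalocean.com/XXXX/php:1.1.39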
My helm deployment chart is basically the default for API Platform:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "api-platform.fullname" . }}
labels:
{{- include "api-platform.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "api-platform.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "api-platform.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "api-platform.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}-caddy
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.caddy.image.repository }}:{{ .Values.caddy.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.caddy.image.pullPolicy }}
env:
- name: SERVER_NAME
value: :80
- name: PWA_UPSTREAM
value: {{ include "api-platform.fullname" . }}-pwa:3000
- name: MERCURE_PUBLISHER_JWT_KEY
valueFrom:
secretKeyRef:
name: {{ include "api-platform.fullname" . }}
key: mercure-publisher-jwt-key
- name: MERCURE_SUBSCRIBER_JWT_KEY
valueFrom:
secretKeyRef:
name: {{ include "api-platform.fullname" . }}
key: mercure-subscriber-jwt-key
ports:
- name: http
containerPort: 80
protocol: TCP
- name: admin
containerPort: 2019
protocol: TCP
volumeMounts:
- mountPath: /var/run/php
name: php-socket
#livenessProbe:
# httpGet:
# path: /
# port: admin
#readinessProbe:
# httpGet:
# path: /
# port: admin
resources:
{{- toYaml .Values.resources | nindent 12 }}
- name: {{ .Chart.Name }}-php
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.php.image.repository }}:{{ .Values.php.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.php.image.pullPolicy }}
env:
{{ include "api-platform.env" . | nindent 12 }}
volumeMounts:
- mountPath: /var/run/php
name: php-socket
readinessProbe:
exec:
command:
- docker-healthcheck
initialDelaySeconds: 120
periodSeconds: 3
livenessProbe:
exec:
command:
- docker-healthcheck
initialDelaySeconds: 120
periodSeconds: 3
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumes:
- name: php-socket
emptyDir: {}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
How do I authorize the Kubernetes cluster to pull from the registry? Is this a Helm thing or a Kubernetes-only thing?
Thanks!
The problem you have is that you do not have an image pull secret for your cluster to use when pulling from the registry.
You will need to add one to give your cluster a way to authorize its requests to the registry.
Using the DigitalOcean Kubernetes integration for Container Registry
DigitalOcean provides a way to add image pull secrets to a Kubernetes cluster in your account. You can link the registry to the cluster in the registry's settings: under "DigitalOcean Kubernetes Integration" select Edit, then select the cluster you want to link the registry to.
This action adds an image pull secret to all namespaces within the cluster, and it will be used by default (unless you specify otherwise).
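As an aside, the doctl CLI can generate the same kind of pull secret as a manifest if you prefer doing it by hand (a sketch, assuming doctl is authenticated against your account):

doctl registry kubernetes-manifest | kubectl apply -f -

This prints a kubernetes.io/dockerconfigjson secret for your registry and applies it to the current namespace.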
The issue was that API Platform's Helm chart ships with a default value for imagePullSecrets, which is
imagePullSecrets: []
in values.yaml
This seems to prevent Kubernetes from using the image pull secret in the way that I expected. The solution was to pass the name of the image pull secret directly on the helm deployment command, like this:
--set "imagePullSecrets[0].name=registry-secret-name-goes-here"
You can view the name of your secret using kubectl get secrets like this:
kubectl get secrets
And the output should look something like this:
NAME                             TYPE                                  DATA   AGE
default-token-lz2ck              kubernetes.io/service-account-token   3      38d
registry-secret-name-goes-here   kubernetes.io/dockerconfigjson        1      2d16h
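If you prefer keeping this in the chart's values rather than on the command line, the equivalent values.yaml entry would look something like this (the secret name is a placeholder):

imagePullSecrets:
  - name: registry-secret-name-goes-here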
Related
I have a microservice (on Node.js).
I am creating a docker image for it and pushing it to my local registry running at
localhost:5001
While deploying this microservice using Helm:
helm upgrade --install --wait --set env=dev --set image.tag=localhost:5001/user-service userservice-api ./build/helm --namespace dev --create-namespace --kube-context http://localhost:5001
I get
Error: Kubernetes cluster unreachable: context "http://localhost:5001"
does not exist
How do I find out the issue / resolve it?
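(For reference, --kube-context expects the name of a context from the local kubeconfig rather than a URL; the available names can be listed with the command below.)

kubectl config get-contexts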
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "chart.fullname" . }}
labels:
{{- include "chart.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "chart.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "chart.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 4006
protocol: TCP
env:
- name: ENV
value: "{{ .Values.env }}"
readinessProbe:
httpGet:
path: /health
port: 4006
initialDelaySeconds: 15
periodSeconds: 10
livenessProbe:
httpGet:
path: /health
port: 4006
initialDelaySeconds: 15
periodSeconds: 10
values.yaml
replicaCount: 1
image:
repository: localhost:5001/user-service
Additional Information
Can someone please help me with the issue?
In the Dockerfile,
RUN npm ci
had to be changed to
RUN npm ci --force
which resolved the issue for me.
I'm attempting to connect and run a pod in an AKS cluster (v1.19.6) as a non-root user with Helm (v3.5.2), and getting a crash loop with the error "I have no name!". The Docker image and service run locally without issue as the correct user at runtime.
After helm create mychart I set up my security in the values.yaml as :
podSecurityContext:
runAsNonRoot: true
runAsUser: 123
securityContext:
# capabilities:
# drop:
# - ALL
#readOnlyRootFilesystem: false
runAsNonRoot: true
runAsUser: 123
The deployment.yaml is below. I've not modified anything else other than the parameters to connect to my AKS cluster:
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "mychart.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
My Dockerfile ends with
USER 123
EXPOSE 8080
CMD [ "sh", "-c", "./blah; bash"]
Am I correct that this is most likely the issue? How do I go about resolving the problem? Supporting documentation would be very helpful; everything I'm finding is outdated.
I created a startup script to start the service with the user declared. Not sure if this is the K8s way of doing it, but it worked. I will leave this unanswered in the event someone has a better solution.
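For what it's worth, the "I have no name!" message itself usually just means the runtime UID (123 here) has no /etc/passwd entry inside the image, so the shell cannot resolve a username. A hypothetical Dockerfile fragment that creates such an entry (assuming a Debian-based image; the user and group names are made up) might look like:

RUN groupadd --gid 123 app && useradd --uid 123 --gid 123 --no-create-home --shell /bin/bash app
USER 123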
We've just had an increase in traffic to our Kubernetes cluster and I've noticed that of our 6 application pods, 2 of them are seemingly not used very much. kubectl top pods returns the following:
You can see that of the 6 pods, 4 of them are using more than 50% of the CPU (2 vCPU nodes), but two of them aren't really doing much at all.
Our cluster is set up on AWS, using the ALB ingress controller. The load balancer is configured to use "least outstanding requests" rather than round robin in an attempt to balance things out a bit more, but we're still seeing this imbalance.
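(For reference, with the AWS Load Balancer Controller that algorithm is typically selected through a target group attribute annotation on the ingress; shown here as a sketch rather than our exact config:)

alb.ingress.kubernetes.io/target-group-attributes: load_balancing.algorithm.type=least_outstanding_requests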
Is there any way of determining why this is happening, or if indeed it actually is a problem? I'm hoping it's normal behaviour rather than an issue but I'd rather investigate it.
App deployment config
This is the configuration of the main application pods. Nothing fancy really
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "app.fullname" . }}
labels:
{{- include "app.labels" . | nindent 4 }}
app.kubernetes.io/component: web
spec:
replicas: {{ .Values.app.replicaCount }}
selector:
matchLabels:
app: {{ include "app.fullname" . }}-web
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/config_maps/app-env-vars.yaml") . | sha256sum }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 10 }}
{{- end }}
labels:
{{- include "app.selectorLabels" . | nindent 8 }}
app.kubernetes.io/component: web
app: {{ include "app.fullname" . }}-web
spec:
serviceAccountName: {{ .Values.serviceAccount.web }}
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- {{ include "app.fullname" . }}-web
topologyKey: failure-domain.beta.kubernetes.io/zone
containers:
- name: {{ .Values.image.name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- bundle
args: ["exec", "puma", "-p", "{{ .Values.app.containerPort }}"]
ports:
- name: http
containerPort: {{ .Values.app.containerPort }}
protocol: TCP
readinessProbe:
httpGet:
path: /healthcheck
port: {{ .Values.app.containerPort }}
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 5
resources:
{{- toYaml .Values.resources | nindent 12 }}
envFrom:
- configMapRef:
name: {{ include "app.fullname" . }}-cm-env-vars
- secretRef:
name: {{ include "app.fullname" . }}-secret-rails-master-key
- secretRef:
name: {{ include "app.fullname" . }}-secret-db-credentials
- secretRef:
name: {{ include "app.fullname" . }}-secret-db-url-app
- secretRef:
name: {{ include "app.fullname" . }}-secret-redis-credentials
That's a known issue with Kubernetes.
In short, Kubernetes doesn't load balance long-lived TCP connections.
This excellent article covers it in detail.
The load distribution your service is showing matches that case exactly.
I am new to helm and kubernetes.
My current requirement is to set up multiple services using a common Helm chart.
Here is the scenario.
I have a common Docker image for all of the services.
For each of the services there is a different command to run. In total there are more than 40 services.
Example
pipenv run python serviceA.py
pipenv run python serviceB.py
pipenv run python serviceC.py
and so on...
Current state of helm chart I have is
demo-helm
|- Chart.yaml
|- templates
|- deployment.yaml
|- _helpers.tpl
|- values
|- values-serviceA.yaml
|- values-serviceB.yaml
|- values-serviceC.yaml
and so on ...
Now, since I want to use the same Helm chart to deploy multiple services, how should I do it?
I used the following command: helm install demo-helm . -f values/values-serviceA.yaml -f values-serviceB.yaml, but it only does a deployment for the values file provided at the end.
Here is my deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "helm.fullname" . }}
labels:
{{- include "helm.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "helm.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "helm.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command: {{- toYaml .Values.command |nindent 12}}
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- name: secrets
mountPath: "/usr/src/app/config.ini"
subPath: config.ini
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: secrets
secret:
secretName: sample-application
defaultMode: 0400
Update.
Since my requirement has been updated to have all the values for the services in a single file, I am able to do it as follows.
deployment.yaml
{{- range $service, $val := .Values.services }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ $service }}
labels:
app: {{ .nameOverride }}
spec:
replicas: {{ .replicaCount }}
selector:
matchLabels:
app: {{ .nameOverride }}
template:
metadata:
labels:
app: {{ .nameOverride }}
spec:
imagePullSecrets:
- name: aws-ecr
containers:
- name: {{ $service }}
image: "image-latest-v3"
imagePullPolicy: IfNotPresent
command: {{- toYaml .command |nindent 12}}
resources:
{{- toYaml .resources | nindent 12 }}
volumeMounts:
- name: secrets
mountPath: "/usr/src/app/config.ini"
subPath: config.ini
volumes:
- name: secrets
secret:
secretName: {{ .secrets }}
defaultMode: 0400
{{- end }}
and values.yaml
services:
#Services for region1
serviceA-region1:
nameOverride: "serviceA-region1"
fullnameOverride: "serviceA-region1"
command: ["bash", "-c", "python serviceAregion1.py"]
secrets: vader-search-region2
resources: {}
replicaCount: 5
#Services for region2
serviceA-region2:
nameOverride: "serviceA-region2"
fullnameOverride: "serviceA-region2"
command: ["bash", "-c", "python serviceAregion2.py"]
secrets: vader-search-region2
resources: {}
replicaCount: 5
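To sanity-check what this renders for each service before installing, helm template can be used (a sketch; the release name is arbitrary):

helm template demo-helm . -f values.yaml

Each service then comes out as its own Deployment document, separated by ---.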
Now I want to know whether the following configuration will work with the changes I am posting below, for both values.yaml
services:
region:
#Services for region1
serviceA-region1:
nameOverride: "serviceA-region1"
fullnameOverride: "serviceA-region1"
command: ["bash", "-c", "python serviceAregion1.py"]
secrets: vader-search-region2
resources: {}
replicaCount: 5
region:2
#Services for region2
serviceA-region2:
nameOverride: "serviceA-region2"
fullnameOverride: "serviceA-region2"
command: ["bash", "-c", "python serviceAregion2.py"]
secrets: vader-search-region2
resources: {}
replicaCount: 5
and deployment.yaml
{{- range $region, $val := .Values.services.region }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ $region }}-{{ .nameOverride }}
labels:
app: {{ .nameOverride }}
spec:
replicas: {{ .replicaCount }}
selector:
matchLabels:
app: {{ .nameOverride }}
template:
metadata:
labels:
app: {{ .nameOverride }}
spec:
imagePullSecrets:
- name: aws-ecr
containers:
- name: {{ $region }}-{{ .nameOverride }}
image: "image-latest-v3"
imagePullPolicy: IfNotPresent
command: {{- toYaml .command |nindent 12}}
resources:
{{- toYaml .resources | nindent 12 }}
volumeMounts:
- name: secrets
mountPath: "/usr/src/app/config.ini"
subPath: config.ini
volumes:
- name: secrets
secret:
secretName: {{ .secrets }}
defaultMode: 0400
{{- end }}
I can recommend you try a helmfile-based approach. I prefer a 3-file approach.
What you'll need:
helmfile-init.yaml: contains YAML instructions that you might need to use for creating and configuring namespaces etc.
helmfile-backend.yaml: contains all the releases you need to deploy (service1, service2 ...)
helmfile.yaml: paths to the above-mentioned (helmfile-init, helmfile-backend YAML files)
a deployment spec file (app_name.json): a specification file that contains all the information regarding the release (release-name, namespace, helm chart version, application-version, etc.)
Helmfile has made deploying multiple applications a bit of a breeze for me. I will edit this answer with a couple of examples shortly.
Meanwhile, you can refer to the official docs here or the Blue Books if you have GitHub access on your machine.
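As a rough illustration of the idea (the release names, namespaces, and chart paths below are assumptions, not taken from your setup):

helmfile.yaml
helmfiles:
  - path: helmfile-init.yaml
  - path: helmfile-backend.yaml

helmfile-backend.yaml
releases:
  - name: service-a
    namespace: dev
    chart: ./demo-helm
    values:
      - values/values-serviceA.yaml
  - name: service-b
    namespace: dev
    chart: ./demo-helm
    values:
      - values/values-serviceB.yaml

Running helmfile -f helmfile.yaml apply then deploys or updates all of the listed releases in one go.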
helm install demo-helm . -f values/values-serviceA.yaml -f values-serviceB.yaml
When you do it like this, the serviceB values will override the serviceA values. You need to run the command separately with a different release name each time, as follows:
helm install demo-helm-A . -f values/values-serviceA.yaml
helm install demo-helm-B . -f values/values-serviceB.yaml
Is there any other approach, like running everything in a loop, since the only difference in each of my values.yaml files is the command section? So I can include command in the same file like this:
command:
  - ["bash", "-c", "python serviceA.py"]
  - ["bash", "-c", "python serviceB.py"]
  - ["bash", "-c", "python serviceC.py"]
– whoami 20 hours ago
Yes, you can write a fairly simple bash script which will run everything in a loop:
for i in {A..Z}; do sed "s/{{COMMAND}}/[\"bash\", \"-c\", \"python service$i.py\"]/g" values/values-service-template.yaml | helm install demo-helm-$i . -f - ; done
Instead of command: ["bash", "-c", "python serviceAregion1.py"] in your values/values-service-template.yaml file, just put command: {{COMMAND}}, as it will be substituted with the exact command on every iteration of the loop.
As to {A..Z}, put there whatever you need in your case. It might be {A..K} if you only have services named from A to K, or {1..40} if you prefer numeric values instead of letters.
The following sed command will substitute the {{COMMAND}} fragment in your original values/values-service-template.yaml with the actual command, e.g. ["bash", "-c", "python serviceA.py"], ["bash", "-c", "python serviceB.py"] and so on.
sed "s/{{COMMAND}}/[\"bash\", \"-c\", \"python service$i.py\"]/g" values/values-service-template.yaml
Then it will be piped ( | symbol ) to:
helm install demo-helm-$i . -f -
where demo-helm-$i will be expanded to e.g. demo-helm-A, but the key element here is the - character, which means: read from standard input instead of reading from a file, which is what is normally expected after the -f flag.
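For completeness, a minimal sketch of what values/values-service-template.yaml could contain (only the command line differs per service; the other keys mirror your existing values files and are assumptions here):

replicaCount: 1
image:
  repository: image-latest-v3
  tag: latest
  pullPolicy: IfNotPresent
command: {{COMMAND}}
resources: {}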
I am trying to deploy the following service:
{{- if .Values.knativeDeploy }}
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
{{- if .Values.service.name }}
name: {{ .Values.service.name }}
{{- else }}
name: {{ template "fullname" . }}
{{- end }}
labels:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
template:
spec:
containers:
- image: quay.io/keycloak/keycloak-gatekeeper:9.0.3
name: gatekeeper-sidecar
ports:
- containerPort: {{ .Values.keycloak.proxyPort }}
env:
- name: KEYCLOAK_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: {{ template "keycloakclient" . }}
key: secret
args:
- --resources=uri=/*
- --discovery-url={{ .Values.keycloak.url }}/auth/realms/{{ .Values.keycloak.realm }}
- --client-id={{ template "keycloakclient" . }}
- --client-secret=$(KEYCLOAK_CLIENT_SECRET)
- --listen=0.0.0.0:{{ .Values.keycloak.proxyPort }} # listen on all interfaces
- --enable-logging=true
- --enable-json-logging=true
- --upstream-url=http://127.0.0.1:{{ .Values.service.internalPort }} # To connect with the main container's port
resources:
{{ toYaml .Values.gatekeeper.resources | indent 12 }}
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
{{- range $pkey, $pval := .Values.env }}
- name: {{ $pkey }}
value: {{ quote $pval }}
{{- end }}
envFrom:
{{ toYaml .Values.envFrom | indent 10 }}
ports:
- containerPort: {{ .Values.service.internalPort }}
livenessProbe:
httpGet:
path: {{ .Values.probePath }}
port: {{ .Values.service.internalPort }}
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
successThreshold: {{ .Values.livenessProbe.successThreshold }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
readinessProbe:
httpGet:
path: {{ .Values.probePath }}
port: {{ .Values.service.internalPort }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
successThreshold: {{ .Values.readinessProbe.successThreshold }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
resources:
{{ toYaml .Values.resources | indent 12 }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- end }}
Which fails with the following error:
Error from server (BadRequest): error when creating "/tmp/helm-template-workdir-290082188/jx/output/namespaces/jx-staging/env/charts/docs/templates/part0-ksvc.yaml": admission webhook "webhook.serving.knative.dev" denied the request: mutation failed: expected exactly one, got both: spec.template.spec.containers'
Now, if I read the specs (https://knative.dev/v0.15-docs/serving/getting-started-knative-app/), I can see this example:
apiVersion: serving.knative.dev/v1 # Current version of Knative
kind: Service
metadata:
name: helloworld-go # The name of the app
namespace: default # The namespace the app will use
spec:
template:
spec:
containers:
- image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
env:
- name: TARGET # The environment variable printed out by the sample app
value: "Go Sample v1"
Which has exactly the same structure. Now, my questions are:
How can I validate my YAML without waiting for a deployment? IntelliJ has a Kubernetes plugin, but I can't find CRD schemas for serving.knative.dev/v1 that are machine-consumable. (https://knative.dev/docs/serving/spec/knative-api-specification-1.0/)
Is it allowed with Knative to have multiple containers? (That configuration works perfectly with apiVersion: apps/v1, kind: Deployment.)
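One way to validate a rendered manifest without doing a full deployment is a server-side dry-run, which has the API server (and admission webhooks that support dry-run, such as Knative's) check the object without persisting it. A sketch, with placeholder release and chart names:

helm template my-release ./chart | kubectl apply --dry-run=server -f -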
Multi-container is an alpha feature in Knative version 0.16.
This feature needs to be enabled by setting multi-container to enabled in the config-features ConfigMap. So edit the ConfigMap using
kubectl edit cm config-features -n knative-serving and enable that feature.
apiVersion: v1
kind: ConfigMap
metadata:
name: config-features
namespace: knative-serving
labels:
serving.knative.dev/release: devel
annotations:
knative.dev/example-checksum: "983ddf13"
data:
_example: |
...
# Indicates whether multi container support is enabled
multi-container: "enabled"
...
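If you prefer not to edit the ConfigMap interactively, the same change can be applied with a patch (a sketch; note that the flag has to be a real key under data, not just inside the _example block):

kubectl patch configmap config-features -n knative-serving --type merge -p '{"data":{"multi-container":"enabled"}}'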
What version of Knative are you using?
Support for multiple containers was added as an alpha feature in 0.16. If you're not using 0.16 or later or don't have the alpha flag enabled, the request will probably be blocked.
There were a number of edge cases to define for multi-container support in Knative, so the default was to be conservative and only allow one container until the constraints had been explored.