Containerized logic app not working when deployed to AKS - kubernetes

We are trying to deploy a Logic App as a containerized workload in AKS. Following is our Dockerfile:
FROM mcr.microsoft.com/azure-functions/dotnet:3.0.14492-appservice
ENV AzureWebJobsStorage=<StorageAccount connection string>
ENV AZURE_FUNCTIONS_ENVIRONMENT Development
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
ENV AzureFunctionsJobHost__Logging__Console__IsEnabled=true
ENV FUNCTIONS_V2_COMPATIBILITY_MODE=true
COPY ./bin/release/netcoreapp3.1/publish/ /home/site/wwwroot
Following is our deployment manifest file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pfna-pgt-sf-pdfextract
  namespace: canary
  labels:
    app: pfna-pgt-sf-pdfextract
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pfna-pgt-sf-pdfextract
  template:
    metadata:
      labels:
        app: pfna-pgt-sf-pdfextract
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
        - name: pfna-pgt-sf-pdfextract
          image: "image_link"
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
          env:
            - name: AzureBlob_connectionString
              value: <connection_string>
            - name: AzureWebJobsStorage
              value: <connection_string>
      imagePullSecrets:
        - name: sbx-acr-secret
---
apiVersion: v1
kind: Service
metadata:
  name: pfna-pgt-sf-pdfextract
  namespace: canary
  labels:
    app: pfna-pgt-sf-pdfextract
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http-pfna-pgt-sf-pdfextract
  selector:
    app: pfna-pgt-sf-pdfextract
Following is connections.json:
{
  "serviceProviderConnections": {
    "AzureBlob": {
      "parameterValues": {
        "connectionString": "@appsetting('AzureWebJobsStorage')"
      },
      "serviceProvider": {
        "id": "/serviceProviders/AzureBlob"
      },
      "displayName": "localAzureBlob"
    }
  },
  "managedApiConnections": {}
}
Following is the host.json:
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1.*, 2.0.0)"
  },
  "extensions": {
    "workflow": {
      "settings": {
        "Runtime.Backend.VariableOperation.MaximumStatelessVariableSize": "5000000"
      }
    }
  }
}
The image runs successfully in Docker Desktop, but when deployed to AKS we get 'Function host is not running'.
Please help resolve this.

You need to specify WEBSITE_HOSTNAME as well (it doesn't matter what the value is, it just needs to be specified).
That being said, as of today there is another issue that is causing the function host to not start (libadvapi32.dll).
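For example, a minimal sketch of setting WEBSITE_HOSTNAME in the deployment manifest above (the value below is an arbitrary placeholder, since the point is only that the variable is set):

env:
  - name: WEBSITE_HOSTNAME
    value: "localhost"   # arbitrary placeholder value; only needs to be set
  - name: AzureWebJobsStorage
    value: <connection_string>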

Related

dapr.io/config annotation to access a secret

I am trying to deploy a k8s pod with a Dapr sidecar container.
I want the dapr container to access a secret key named "MY_KEY" stored in a secret called my-secrets.
I wrote this manifest for the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: my-app
        dapr.io/app-port: "80"
        dapr.io/config: |
          {
            "components": [
              {
                "name": "secrets",
                "value": {
                  "MY_KEY": {
                    "secretName": "my-secrets",
                    "key": "MY_KEY"
                  }
                }
              }
            ]
          }
    spec:
      containers:
        - name: my_container
          image: <<image_source>>
          imagePullPolicy: "Always"
          ports:
            - containerPort: 80
          envFrom:
            - secretRef:
                name: my-secrets
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: Development
            - name: ASPNETCORE_URLS
              value: http://+:80
      imagePullSecrets:
        - name: <<image_pull_secret>>
but it seems that it cannot create the configuration, the dapr container log is:
time="2023-01-24T09:05:50.927484097Z" level=info msg="starting Dapr Runtime -- version 1.9.5 -- commit f5f847eef8721d85f115729ee9efa820fe7c4cd3" app_id=my-app instance=my-container-6db6f7f6b9-tggww scope=dapr.runtime type=log ver=1.9.5
time="2023-01-24T09:05:50.927525344Z" level=info msg="log level set to: info" app_id=emy-app instance=my-container-6db6f7f6b9-tggww scope=dapr.runtime type=log ver=1.9.5
time="2023-01-24T09:05:50.927709269Z" level=info msg="metrics server started on :9090/" app_id=my-app instance=my-container-6db6f7f6b9-tggww scope=dapr.metrics type=log ver=1.9.5
time="2023-01-24T09:05:50.92795239Z" level=info msg="Initializing the operator client (config: {
"components": [
{
"name": "secrets",
"value": {
"MY_KEY": {
"secretName": "my-secrets",
"key": "MY_KEY"
}
}
}
]
}
)" app_id=my-app instance=my-container-6db6f7f6b9-tggww scope=dapr.runtime type=log ver=1.9.5
time="2023-01-24T09:05:50.93737904Z" level=fatal msg="error loading configuration: rpc error: code = Unknown desc = error getting configuration: Configuration.dapr.io "{
"components": [
{
"name": "secrets",
"value": {
"MY_KEY": {
"secretName": "my-secrets",
"key": "MY_KEY"
}
}
}
]
}" not found" app_id=my-app instance=my-container-6db6f7f6b9-tggww scope=dapr.runtime type=log ver=1.9.5
Can anyone tell me what I am doing wrong?
Thanks in advance for your help.

How to change Kubernetes / PostgreSQL connection credentials and host

I want to adapt the example here:
https://www.enterprisedb.com/blog/how-deploy-pgadmin-kubernetes
I want to change the following:
- the host should be localhost
- the user / password to access the PostgreSQL server via pgAdmin 4 should be postgres / admin
But I don't know what should be changed in the files.
pgadmin-secret.yaml file (the password is SuperSecret):
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: pgadmin
data:
  pgadmin-password: U3VwZXJTZWNyZXQ=
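(For context, the pgadmin-password value above is just the base64 encoding of SuperSecret. As I understand it, the same Secret could also be written with stringData, which lets Kubernetes do the base64 encoding, so a different password would be easy to swap in; a minimal sketch:)

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: pgadmin
stringData:
  # plain-text value; Kubernetes stores it base64-encoded under .data
  pgadmin-password: SuperSecret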
pgadmin-configmap.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pgadmin-config
data:
  servers.json: |
    {
      "Servers": {
        "1": {
          "Name": "PostgreSQL DB",
          "Group": "Servers",
          "Port": 5432,
          "Username": "postgres",
          "Host": "postgres.domain.com",
          "SSLMode": "prefer",
          "MaintenanceDB": "postgres"
        }
      }
    }
pgadmin-service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: http
  selector:
    app: pgadmin
  type: NodePort
The pgadmin-statefulset.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pgadmin
spec:
  serviceName: pgadmin-service
  podManagementPolicy: Parallel
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: pgadmin
          image: dpage/pgadmin4:5.4
          imagePullPolicy: Always
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: user@domain.com
            - name: PGADMIN_DEFAULT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pgadmin
                  key: pgadmin-password
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          volumeMounts:
            - name: pgadmin-config
              mountPath: /pgadmin4/servers.json
              subPath: servers.json
              readOnly: true
            - name: pgadmin-data
              mountPath: /var/lib/pgadmin
      volumes:
        - name: pgadmin-config
          configMap:
            name: pgadmin-config
  volumeClaimTemplates:
    - metadata:
        name: pgadmin-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 3Gi
The commands from the example work OK with the original YAML files and I get the pgAdmin 4 login form in a web page.
But even with the unchanged files, I get a bad username/password error.

telepresence: error: workload "xxx-service.default" not found

I have this chart of a personal project deployed in minikube:
---
# Source: frontend/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: xxx-app-service
spec:
  selector:
    app: xxx-app
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
---
# Source: frontend/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '3'
  creationTimestamp: '2022-06-19T21:57:01Z'
  generation: 3
  labels:
    app: xxx-app
  name: xxx-app
  namespace: default
  resourceVersion: '43299'
  uid: 7c43767a-abbd-4806-a9d2-6712847a0aad
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: xxx-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: xxx-app
    spec:
      containers:
        - image: "registry.gitlab.com/esavara/xxx/wm:staging"
          name: frontend
          imagePullPolicy: Always
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          ports:
            - containerPort: 3000
          livenessProbe:
            httpGet:
              path: /
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 3
          env:
            - name: PORT
              value: "3000"
          resources: {}
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: regcred
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
# Source: frontend/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: '2022-06-19T22:28:58Z'
  generation: 1
  name: xxx-app
  namespace: default
  resourceVersion: '44613'
  uid: b58dcd17-ee1f-42e5-9dc7-d915a21f97b5
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - backend:
              service:
                name: "xxx-app-service"
                port:
                  number: 3000
            path: /
            pathType: Prefix
status:
  loadBalancer:
    ingress:
      - ip: 192.168.39.80
---
# Source: frontend/templates/gitlab-registry-sealed.json
{
  "kind": "SealedSecret",
  "apiVersion": "bitnami.com/v1alpha1",
  "metadata": {
    "name": "regcred",
    "namespace": "default",
    "creationTimestamp": null
  },
  "spec": {
    "template": {
      "metadata": {
        "name": "regcred",
        "namespace": "default",
        "creationTimestamp": null
      },
      "type": "kubernetes.io/dockerconfigjson",
      "data": null
    },
    "encryptedData": {
".dockerconfigjson": "AgBpHoQw1gBq0IFFYWnxlBLLYl1JC23TzbRWGgLryVzEDP8p+NAGjngLFZmtklmCEHLK63D9pp3zL7YQQYgYBZUjpEjj8YCTOhvnjQIg7g+5b/CPXWNoI5TuNexNJFKFv1mN5DzDk9np/E69ogRkDJUvUsbxvVXs6/TKGnRbnp2NuI7dTJ18QgGXdLXs7S416KE0Yid9lggw06JrDN/OSxaNyUlqFGcRJ6OfGDAHivZRV1Kw9uoX06go3o+AjVd6eKlDmmvaY/BOc52bfm7pY2ny1fsXGouQ7R7V1LK0LcsCsKdAkg6/2DU3v32mWZDKJgKkK5efhTQr1KGOBoLuuHKX6nF5oMA1e1Ii3wWe77lvWuvlpaNecCBTc7im+sGt0dyJb4aN5WMLoiPGplGqnuvNqEFa/nhkbwXm5Suke2FQGKyzDsMIBi9p8PZ7KkOJTR1s42/8QWVggTGH1wICT1RzcGzURbanc0F3huQ/2RcTmC4UvYyhCUqr3qKrbAIYTNBayfyhsBaN5wVRnV5LiPxjLbbOmObSr/8ileJgt1HMQC3V9pVLZobaJvlBjr/mmNlrF118azJOFY+a/bqzmtBWwlcvVuel/EaOb8uA8mgwfnbnShMinL1OWTHoj+D0ayFmUdsQgMYwRmStnC7x/6OXThmBgfyrLguzz4V2W8O8qbdDz+O5QoyboLUuR9WQb/ckpRio2qa5tidnKXzZzbWzFEevv9McxvCC1+ovvw4IullU8ra3FutnTentmPHKU2OPr1EhKgFKIX20u8xZaeIJYLCiZlVQohylpjkHnBZo1qw+y1CTiDwytunpmkoGGAXLx++YQSjEJEo889PmVVlSwZ8p/Rdusiz1WbgKqFt27yZaOfYzR2bT++HuB5x6zqfK6bbdV/UZndXs"
    }
  }
}
I'm trying to use Telepresence to redirect traffic from the deployed application to a Docker container which has my project mounted inside and supports hot-reloading, so I can continue developing it but inside Kubernetes. However, running telepresence intercept xxx-app-service --port 3000:3000 --env-file /tmp/xxx-app-service.env fails with the following error:
telepresence: error: workload "xxx-app-service.default" not found
Why is this happening and how do I fix it?

Why is the Pulumi Kubernetes provider changing the service and deployment name?

I tried to convert the below working kubernetes manifest from
##namespace
---
apiVersion: v1
kind: Namespace
metadata:
  name: poc
##postgress
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: db
  name: db
  namespace: poc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - image: postgres
          name: postgres
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: postgres
          ports:
            - containerPort: 5432
              name: postgres
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: db
  name: db
  namespace: poc
spec:
  type: ClusterIP
  ports:
    - name: "db-service"
      port: 5432
      targetPort: 5432
  selector:
    app: db
##adminer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ui
  name: ui
  namespace: poc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ui
  template:
    metadata:
      labels:
        app: ui
    spec:
      containers:
        - image: adminer
          name: adminer
          ports:
            - containerPort: 8080
              name: ui
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ui
  name: ui
  namespace: poc
spec:
  type: NodePort
  ports:
    - name: "ui-service"
      port: 8080
      targetPort: 8080
  selector:
    app: ui
to
import * as k8s from "@pulumi/kubernetes";
import * as kx from "@pulumi/kubernetesx";
//db
const dbLabels = { app: "db" };
const dbDeployment = new k8s.apps.v1.Deployment("db", {
    spec: {
        selector: { matchLabels: dbLabels },
        replicas: 1,
        template: {
            metadata: { labels: dbLabels },
            spec: {
                containers: [
                    {
                        name: "postgres",
                        image: "postgres",
                        env: [{ name: "POSTGRES_USER", value: "postgres"}, { name: "POSTGRES_PASSWORD", value: "postgres"}],
                        ports: [{containerPort: 5432}]
                    }
                ]
            }
        }
    }
});
const dbService = new k8s.core.v1.Service("db", {
    metadata: { labels: dbDeployment.spec.template.metadata.labels },
    spec: {
        selector: dbLabels,
        type: "ClusterIP",
        ports: [{ port: 5432, targetPort: 5432, protocol: "TCP" }],
    }
});
//adminer
const uiLabels = { app: "ui" };
const uiDeployment = new k8s.apps.v1.Deployment("ui", {
    spec: {
        selector: { matchLabels: uiLabels },
        replicas: 1,
        template: {
            metadata: { labels: uiLabels },
            spec: {
                containers: [
                    {
                        name: "adminer",
                        image: "adminer",
                        ports: [{containerPort: 8080}],
                    }
                ]
            }
        }
    }
});
const uiService = new k8s.core.v1.Service("ui", {
    metadata: { labels: uiDeployment.spec.template.metadata.labels },
    spec: {
        selector: uiLabels,
        type: "NodePort",
        ports: [{ port: 8080, targetPort: 8080, protocol: "TCP" }]
    }
});
With this, pulumi up -y succeeds without errors, but the application is not fully up and running, because the adminer image tries to reach the Postgres database at the hostname db, while Pulumi appears to be changing the service name.
My question here is: how do I make this work?
Is there a way in Pulumi to be strict with the naming?
Note: I know we can easily pass the hostname as an env variable to the adminer image, but I am wondering if there is anything that allows us to not change the name.
Pulumi automatically appends random suffixes to your resource names to help with replacing resources. You can find more information about this in the FAQ.
If you'd like to disable this, you can override it using the metadata, like so:
import * as k8s from "@pulumi/kubernetes";
import * as kx from "@pulumi/kubernetesx";
//db
const dbLabels = { app: "db" };
const dbDeployment = new k8s.apps.v1.Deployment("db", {
    spec: {
        selector: { matchLabels: dbLabels },
        replicas: 1,
        template: {
            metadata: { labels: dbLabels },
            spec: {
                containers: [
                    {
                        name: "postgres",
                        image: "postgres",
                        env: [{ name: "POSTGRES_USER", value: "postgres"}, { name: "POSTGRES_PASSWORD", value: "postgres"}],
                        ports: [{containerPort: 5432}]
                    }
                ]
            }
        }
    }
});
const dbService = new k8s.core.v1.Service("db", {
    metadata: {
        name: "db", // explicitly set a name on the service
        labels: dbDeployment.spec.template.metadata.labels
    },
    spec: {
        selector: dbLabels,
        type: "ClusterIP",
        ports: [{ port: 5432, targetPort: 5432, protocol: "TCP" }],
    }
});
With that said, it's not always best practice to hardcode names like this; if possible, you should reference outputs from your resources and pass them to new resources.
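For instance, a rough sketch of that approach, reusing the uiLabels and dbService definitions from the code above (DB_HOST is a hypothetical variable name used only for illustration; wire it to whatever setting your UI actually reads):

const uiDeployment = new k8s.apps.v1.Deployment("ui", {
    spec: {
        selector: { matchLabels: uiLabels },
        replicas: 1,
        template: {
            metadata: { labels: uiLabels },
            spec: {
                containers: [
                    {
                        name: "adminer",
                        image: "adminer",
                        ports: [{ containerPort: 8080 }],
                        env: [
                            // dbService.metadata.name is a Pulumi Output<string>, so the
                            // real (auto-suffixed) service name is injected at deploy time.
                            { name: "DB_HOST", value: dbService.metadata.name },
                        ],
                    },
                ],
            },
        },
    },
});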

Skipper HTTPS REST endpoint requests returning HTTP URLs

I am trying a POC with Spring Cloud Data Flow streams and have the application running in Pivotal Cloud Foundry. Trying the same in Kubernetes, the Spring Data Flow server dashboard is not loading. I debugged the issue and found the root cause: when the dashboard is loaded, it tries to hit the Skipper REST endpoint /api, and this returns a response with the URLs of the other Skipper endpoints, but the returned URLs are all http. How can I force Skipper to return https URLs instead of http? Below is the response when I curl the same endpoint.
C:>curl -k https://<skipper_url>/api
RESPONSE FROM SKIPPER
{
  "_links" : {
    "repositories" : {
      "href" : "http://<skipper_url>/api/repositories{?page,size,sort}",
      "templated" : true
    },
    "deployers" : {
      "href" : "http://<skipper_url>/api/deployers{?page,size,sort}",
      "templated" : true
    },
    "releases" : {
      "href" : "http://<skipper_url>/api/releases{?page,size,sort}",
      "templated" : true
    },
    "packageMetadata" : {
      "href" : "http://<skipper_url>/api/packageMetadata{?page,size,sort,projection}",
      "templated" : true
    },
    "about" : {
      "href" : "http://<skipper_url>/api/about"
    },
    "release" : {
      "href" : "http://<skipper_url>/api/release"
    },
    "package" : {
      "href" : "http://<skipper_url>/api/package"
    },
    "profile" : {
      "href" : "http://<skipper_url>/api/profile"
    }
  }
}
Kubernetes deployment YAML:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: skipper-server-network-policy
spec:
  podSelector:
    matchLabels:
      app: skipper-server
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              gkp_namespace: ingress-nginx
  egress:
    - {}
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: v1
kind: Secret
metadata:
  name: poc-secret
data:
  .dockerconfigjson: ewogICJhdXRocyI6
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: skipper-server
  labels:
    app: skipper-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skipper-server
  template:
    metadata:
      labels:
        app: skipper-server
      annotations:
        kubernetes.io/psp: nonroot
    spec:
      containers:
        - name: skipper-server
          image: <image_path>
          imagePullPolicy: Always
          ports:
            - containerPort: 7577
              protocol: TCP
          resources:
            limits:
              cpu: "4"
              memory: 2Gi
            requests:
              cpu: 25m
              memory: 1Gi
          securityContext:
            runAsUser: 99
      imagePullSecrets:
        - name: poc-secret
      serviceAccount: spark
      serviceAccountName: spark
---
apiVersion: v1
kind: Service
metadata:
  name: skipper-server
  labels:
    app: skipper-server
spec:
  ports:
    - port: 80
      targetPort: 7577
      protocol: TCP
      name: http
  selector:
    app: skipper-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: skipper-server
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
    ingress.kubernetes.io/secure-backends: "true"
    kubernetes.io/ingress.allow.http: true
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
    - host: "<skipper_url>"
      http:
        paths:
          - path: /
            backend:
              serviceName: skipper-server
              servicePort: 80
  tls:
    - hosts:
        - "<skipper_url>"
SKIPPER APPLICATION.properties
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.server.use-forward-headers=true
The root cause was the Skipper /api endpoint returning http URLs for the /deployers endpoint, with the Kubernetes ingress then trying to redirect and getting blocked with a 308 error. Adding the below to the Skipper env properties and ingress annotations fixed the issue.
DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: skipper-server
spec:
  template:
    spec:
      containers:
        - name: skipper-server
          env:
            - name: "server.tomcat.internal-proxies"
              value: ".*"
            - name: "server.use-forward-headers"
              value: "true"
INGRESS
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: skipper-server
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"