ContainerApp Revision Failed when using dapr - azure-bicep

I have created a Container Apps environment with one container app using a Bicep template, and here is a snippet of how I configured the app:
  ingress: {
    external: true
    targetPort: 80
    allowInsecure: false
    transport: 'http2'
    traffic: [
      {
        latestRevision: true
        weight: 100
      }
    ]
  }
  registries: [
    {
      server: acr_login_server
      username: acr_name
      passwordSecretRef: 'myregistrypassword'
    }
  ]
  dapr: {
    appId: containerapp_name
    appPort: 80
    appProtocol: 'http'
    enabled: true
  }
}
I am using http2 transport because we expose a gRPC service as well. When checking the revision, it shows Failed, and the logs show that there is an issue with Dapr:
time="2022-10-19T10:35:35.391746798Z" level=fatal msg="error loading configuration: rpc error: code = Unavailable desc = connection error: desc = \"transport: authentication handshake failed: x509: certificate signed by unknown authority (possibly because of \\\"x509: ECDSA verification failure\\\" while trying to verify candidate authority certificate \\\"cluster.local\\\")\"" app_id=containerapp-a instance=containerapp-a--t1gheb2-77c44cf6c6-rxjwx scope=dapr.runtime type=log ver=1.8.4-msft-2
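One mismatch that may be worth ruling out (an assumption on my part, not a confirmed fix for the certificate error above): the ingress uses transport: 'http2' because the app serves gRPC, yet Dapr is told to reach the app over plain HTTP via appProtocol: 'http'. If port 80 actually speaks gRPC, the Dapr block would look like this:

  dapr: {
    appId: containerapp_name
    appPort: 80
    appProtocol: 'grpc' // assumption: only correct if the app serves gRPC on this port
    enabled: true
  }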

Related

Error core: failed to lookup token: error=failed to read entry, dial tcp [::1]:8500: getsockopt: connection refused in Vault log

We are performing a load test on our application using JMeter. Our application uses Consul and Vault as backend services for reading/storing application configuration data. While the load test runs, our application queries Vault for authentication data, and this happens for each incoming request. Initially it runs fine for some duration (10 to 15 minutes) and I can see the success responses in JMeter, but eventually the responses start failing for all requests. I see the following error in the Vault log for each request, but do not see any error/exception in the Consul log.
Error in Vault log
[ERROR] core: failed to lookup token: error=failed to read entry: Get http://localhost:8500/v1/kv//vault/sys/token/id/87f7b82131cb8fa1ef71aa52579f155d4cf9f095: dial tcp [::1]:8500: getsockopt: connection refused
As of now the load is 100 requests (users) every 10 milliseconds with a ramp-up period of 60 seconds, and this executes in a loop. What could be the cause of this error? Is it due to a limited number of connections to port 8500?
Below is my Vault and Consul configuration.
Vault
backend "consul" {
address = "localhost:8500"
path = "app/vault/"
}
listener "tcp" {
address = "10.88.97.216:8200"
cluster_address = "10.88.97.216:8201"
tls_disable = 0
tls_min_version = "tls12"
tls_cert_file = "/var/certs/vault.crt"
tls_key_file = "/var/certs/vault.key"
}
Consul
{
  "data_dir": "/var/consul",
  "log_level": "info",
  "server": true,
  "leave_on_terminate": true,
  "ui": true,
  "client_addr": "127.0.0.1",
  "ports": {
    "dns": 53,
    "serf_lan": 8301,
    "serf_wan": 8302
  },
  "disable_update_check": true,
  "enable_script_checks": true,
  "disable_remote_exec": false,
  "domain": "primehome",
  "limits": {
    "http_max_conns_per_client": 1000,
    "rpc_max_conns_per_client": 1000
  },
  "service": {
    "name": "nginx-consul-https",
    "port": 443,
    "checks": [{
      "http": "https://localhost/nginx_status",
      "tls_skip_verify": true,
      "interval": "10s",
      "timeout": "5s",
      "status": "passing"
    }]
  }
}
I have also configured http_max_conns_per_client and rpc_max_conns_per_client, thinking that it might be due to a limited number of connections per client, but I am still seeing this error in the Vault log.
After taking another look at this, the issue appears to be that Vault is attempting to contact Consul over the IPv6 loopback address (likely because both the v4 and v6 addresses are present in /etc/hosts), while Consul is only listening on the IPv4 loopback address.
You can likely resolve this through one of the following methods.
Use 127.0.0.1 instead of localhost for Consul's address in the Vault config.
backend "consul" {
address = "127.0.0.1:8500"
path = "app/vault/"
}
Configure Consul to listen on both the IPv4 and IPv6 loopback addresses.
{
  "client_addr": "127.0.0.1 [::1]"
}
(Rest of the config omitted for brevity.)
Remove the localhost hostname from the IPv6 loopback entry in /etc/hosts:
127.0.0.1 localhost
# Old hosts entry for ::1
#::1 localhost ip6-localhost ip6-loopback
# New entry
::1 ip6-localhost ip6-loopback
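Before editing anything, you can likely confirm the diagnosis on the host itself (a sketch assuming a Linux box with the usual glibc/iproute2 tools):

# Does "localhost" resolve to ::1 in addition to 127.0.0.1?
getent hosts localhost

# Which addresses is Consul's HTTP port actually bound to?
ss -tln | grep 8500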

Using jhipster framework to configure mongodb prompt not authorized

I used scaffolding to generate a new microservice, then I made the following configuration for MongoDB:
logging:
  level:
    ROOT: DEBUG
    io.github.jhipster: DEBUG
    com.fzai.fileservice: DEBUG

eureka:
  instance:
    prefer-ip-address: true
  client:
    service-url:
      defaultZone: http://admin:${jhipster.registry.password}@localhost:8761/eureka/

spring:
  profiles:
    active: dev
    include:
      - swagger
      # Uncomment to activate TLS for the dev profile
      #- tls
  devtools:
    restart:
      enabled: true
      additional-exclude: static/**
    livereload:
      enabled: false # we use Webpack dev server + BrowserSync for livereload
  jackson:
    serialization:
      indent-output: true
  data:
    mongodb:
      host: 42.193.124.204
      port: 27017
      username: admin
      password: admin123
      authentication-database: fileService
      database: fileService
  mail:
    host: localhost
    port: 25
    username:
    password:
  messages:
    cache-duration: PT1S # 1 second, see the ISO 8601 standard
  thymeleaf:
    cache: false
  sleuth:
    sampler:
      probability: 1 # report 100% of traces
  zipkin: # Use the "zipkin" Maven profile to have the Spring Cloud Zipkin dependencies
    base-url: http://localhost:9411
    enabled: false
    locator:
      discovery:
        enabled: true

server:
  port: 8081

# ===================================================================
# JHipster specific properties
#
# Full reference is available at: https://www.jhipster.tech/common-application-properties/
# ===================================================================
jhipster:
  cache: # Cache configuration
    hazelcast: # Hazelcast distributed cache
      time-to-live-seconds: 3600
      backup-count: 1
      management-center: # Full reference is available at: http://docs.hazelcast.org/docs/management-center/3.9/manual/html/Deploying_and_Starting.html
        enabled: false
        update-interval: 3
        url: http://localhost:8180/mancenter
  # CORS is disabled by default on microservices, as you should access them through a gateway.
  # If you want to enable it, please uncomment the configuration below.
  cors:
    allowed-origins: "*"
    allowed-methods: "*"
    allowed-headers: "*"
    exposed-headers: "Authorization,Link,X-Total-Count"
    allow-credentials: true
    max-age: 1800
  security:
    client-authorization:
      access-token-uri: http://uaa/oauth/token
      token-service-id: uaa
      client-id: internal
      client-secret: internal
  mail: # specific JHipster mail property, for standard properties see MailProperties
    base-url: http://127.0.0.1:8081
  metrics:
    logs: # Reports metrics in the logs
      enabled: false
      report-frequency: 60 # in seconds
  logging:
    use-json-format: false # By default, logs are not in Json format
    logstash: # Forward logs to logstash over a socket, used by LoggingConfiguration
      enabled: false
      host: localhost
      port: 5000
      queue-size: 512
  audit-events:
    retention-period: 30 # Number of days before audit events are deleted.

oauth2:
  signature-verification:
    public-key-endpoint-uri: http://uaa/oauth/token_key
    # ttl for public keys to verify JWT tokens (in ms)
    ttl: 3600000
    # max. rate at which public keys will be fetched (in ms)
    public-key-refresh-rate-limit: 10000
  web-client-configuration:
    # keep in sync with UAA configuration
    client-id: web_app
    secret: changeit
An error occurred while I was running the project:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mongobee' defined in class path resource [com/fzai/fileservice/config/DatabaseConfiguration.class]: Invocation of init method failed; nested exception is com.mongodb.MongoQueryException: Query failed with error code 13 and error message 'not authorized on fileService to execute command { find: "system.indexes", filter: { ns: "fileService.dbchangelog", key: { changeId: 1, author: 1 } }, limit: 1, singleBatch: true, $db: "fileService" }' on server 42.193.124.204:27017
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1771)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:593)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:847)
    at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:744)
    at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:391)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:312)
    at com.fzai.fileservice.FileServiceApp.main(FileServiceApp.java:70)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49)
Caused by: com.mongodb.MongoQueryException: Query failed with error code 13 and error message 'not authorized on fileService to execute command { find: "system.indexes", filter: { ns: "fileService.dbchangelog", key: { changeId: 1, author: 1 } }, limit: 1, singleBatch: true, $db: "fileService" }' on server 42.193.124.204:27017
    at com.mongodb.operation.FindOperation$1.call(FindOperation.java:706)
    at com.mongodb.operation.FindOperation$1.call(FindOperation.java:695)
    at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:462)
    at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:406)
    at com.mongodb.operation.FindOperation.execute(FindOperation.java:695)
    at com.mongodb.operation.FindOperation.execute(FindOperation.java:83)
    at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:179)
    at com.mongodb.client.internal.FindIterableImpl.first(FindIterableImpl.java:198)
    at com.github.mongobee.dao.ChangeEntryIndexDao.findRequiredChangeAndAuthorIndex(ChangeEntryIndexDao.java:35)
    at com.github.mongobee.dao.ChangeEntryDao.ensureChangeLogCollectionIndex(ChangeEntryDao.java:121)
    at com.github.mongobee.dao.ChangeEntryDao.connectMongoDb(ChangeEntryDao.java:61)
    at com.github.mongobee.Mongobee.execute(Mongobee.java:143)
    at com.github.mongobee.Mongobee.afterPropertiesSet(Mongobee.java:126)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1830)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1767)
    ... 19 common frames omitted
But in another simple Spring Boot project of mine, I used the same configuration, and it runs and connects successfully:
spring:
  application:
    name: springboot1
  data:
    mongodb:
      host: 42.193.124.204
      port: 27017
      username: admin
      password: admin123
      authentication-database: fileService
      database: fileService
This is the user and role I created:
{
  "_id": "fileService.admin",
  "userId": UUID("03f75395-f129-4273-b6a6-b2dc3d1f7974"),
  "user": "admin",
  "db": "fileService",
  "roles": [
    {
      "role": "dbOwner",
      "db": "fileService"
    },
    {
      "role": "readWrite",
      "db": "fileService"
    }
  ],
  "mechanisms": [
    "SCRAM-SHA-1",
    "SCRAM-SHA-256"
  ]
}
I want to know what's wrong.
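One way to narrow this down (a sketch, assuming the legacy mongo shell can reach the server): run the same query Mongobee issues, authenticated exactly as the application is, and see whether it is rejected outside JHipster too. If the same "not authorized" error appears, the problem is server-side (insufficient privileges, or, on MongoDB 4.0+, the removal of direct access to system.indexes, which Mongobee still queries) rather than the JHipster configuration.

mongo "mongodb://admin:admin123@42.193.124.204:27017/fileService?authSource=fileService"
// the query from the error message above:
db.system.indexes.find({ ns: "fileService.dbchangelog", key: { changeId: 1, author: 1 } }).limit(1)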

TLS encrypted PostgreSQL connection not possible

I would like to establish a TLS encrypted connection to a PostgreSQL 11 database using Tokio as the framework, Deadpool as the connection pooler, and rustls as the TLS library.
I developed/modified the following code:
let pool = if let Some(ca_cert) = settings.db_ca_cert {
    let mut tls_config = ClientConfig::new();
    let cert_file = File::open(&ca_cert)?;
    let mut buf = BufReader::new(cert_file);
    tls_config.root_store.add_pem_file(&mut buf).map_err(|_| {
        anyhow::anyhow!("failed to read database root certificate: {}", ca_cert)
    })?;
    let tls = MakeRustlsConnect::new(tls_config);
    settings.pg.create_pool(tls)?
} else {
    settings.pg.create_pool(NoTls)?
};
My test scenario is taken from here:
- PostgreSQL 11 Docker container (including TLS turned on)
- TLS was already tested successfully with the psql client
I now get the following error message and can't explain the problem. I already checked the access rights and other parameters.
/usr/local/bin/cargo run --color=always
Finished dev [unoptimized + debuginfo] target(s) in 0.20s
Running `target/debug/tokio-postgres-rustls-connection-pool-demo`
DEBUG tokio_postgres_rustls_connection_pool_demo > settings: Settings { pg: Config { user: Some("postgres"), password: Some("postgres"), dbname: Some("postgres"), options: Some("sslrootcert=/xxx/tokio-postgres-rustls-connection-pool-demo/docker/files/cert/ca.pem"), application_name: None, ssl_mode: None, host: Some("127.0.0.1"), hosts: None, port: Some(6432), ports: None, connect_timeout: None, keepalives: None, keepalives_idle: None, target_session_attrs: None, channel_binding: None, manager: None, pool: None }, db_ca_cert: None }
Error: Backend(Error { kind: Connect, cause: Some(Os { code: 2, kind: NotFound, message: "No such file or directory" }) })
I looked at the logs of the database and could identify the following error:
[86] LOG: XX000: could not accept SSL connection: Success
[86] LOCATION: be_tls_open_server, be-secure-openssl.c:408
How can I solve the problem?
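One detail worth noting from the output above (an observation, not a confirmed diagnosis): the settings debug line shows db_ca_cert: None, so the else branch ran and the pool was built with NoTls; the CA path only appears inside options, which tokio-postgres forwards to the server as startup options rather than using it client-side. A fail-fast sketch (assuming TLS is mandatory here and the surrounding function returns anyhow::Result) that prevents the silent fallback:

let pool = match settings.db_ca_cert {
    Some(ca_cert) => {
        // Same TLS setup as above, only reachable when a CA path is configured.
        let mut tls_config = ClientConfig::new();
        let cert_file = File::open(&ca_cert)?;
        let mut buf = BufReader::new(cert_file);
        tls_config.root_store.add_pem_file(&mut buf).map_err(|_| {
            anyhow::anyhow!("failed to read database root certificate: {}", ca_cert)
        })?;
        settings.pg.create_pool(MakeRustlsConnect::new(tls_config))?
    }
    // Assumption: TLS is mandatory, so refuse to start instead of using NoTls.
    None => anyhow::bail!("TLS required but db_ca_cert is not configured"),
};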

Error: No valid responses from any peers. Errors: peer=undefined, status=grpc, message=Endorsement has failed

I'm working on Hyperledger Fabric with Kubernetes (minikube) on Ubuntu 18.04. In my network there are two organisations with one peer each, and the orderer type is Solo. I deployed this whole network on minikube. Chaincode install and instantiation completed successfully on the pods. After that, I am trying to invoke it using the SDK.
I am using the below code as invoke.js
'use strict';

const { Gateway, Wallets } = require('fabric-network');
const fs = require('fs');
const path = require('path');

async function main() {
    try {
        // load the network configuration
        const ccpPath = path.resolve(__dirname, '..', '..', 'first-network', 'connection1-org1.json');
        let ccp = JSON.parse(fs.readFileSync(ccpPath, 'utf8'));

        // Create a new file system based wallet for managing identities.
        const walletPath = path.join(process.cwd(), 'wallet');
        const wallet = await Wallets.newFileSystemWallet(walletPath);
        console.log(`Wallet path: ${walletPath}`);

        // Check to see if we've already enrolled the user.
        const identity = await wallet.get('user1');
        if (!identity) {
            console.log('An identity for the user "user1" does not exist in the wallet');
            console.log('Run the registerUser.js application before retrying');
            return;
        }

        // Create a new gateway for connecting to our peer node.
        const gateway = new Gateway();
        await gateway.connect(ccp, { wallet, identity: 'user1', discovery: { enabled: true, asLocalhost: true } });

        // Get the network (channel) our contract is deployed to.
        // NOTE: channelname and contractname are not defined anywhere in this
        // script; the channel declared in connection.json is 'airlinechannel'.
        const network = await gateway.getNetwork(channelname);

        // Get the contract from the network.
        const contract = network.getContract(contractname);

        // Submit the specified transaction.
        // createCar transaction - requires 5 arguments, ex: ('createCar', 'CAR12', 'Honda', 'Accord', 'Black', 'Tom')
        // changeCarOwner transaction - requires 2 args, ex: ('changeCarOwner', 'CAR10', 'Dave')
        await contract.submitTransaction('arumnet', 'argument');
        console.log('Transaction has been submitted');

        // Disconnect from the gateway.
        await gateway.disconnect();
    } catch (error) {
        console.error(`Failed to submit transaction: ${error}`);
        process.exit(1);
    }
}

main();
configTx.yaml
Organizations:
  - &Orderer
    Name: Orderer
    ID: OrdererMSP
    MSPDir: ./crypto-config/ordererOrganizations/acme.com/msp
    # Policies are mandatory starting 2.x
    Policies: &OrdererPolicies
      Readers:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"
      Writers:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"
      Admins:
        Type: Signature
        # ONLY Admin Role can carry out administration activities
        Rule: "OR('OrdererMSP.admin')"
      Endorsement:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"

  - &Acme
    Name: Acme
    ID: AcmeMSP
    MSPDir: ./crypto-config/peerOrganizations/acme.com/msp
    Policies: &AcmePolicies
      Readers:
        Type: Signature
        # Any member can READ e.g., query
        Rule: "OR('AcmeMSP.member')"
      Writers:
        Type: Signature
        # Any member can WRITE e.g., submit transaction
        Rule: "OR('AcmeMSP.member')"
      Admins:
        Type: Signature
        # Either Acme admin OR Orderer Admin can carry out admin activities
        Rule: "OR('AcmeMSP.admin')"
      Endorsement:
        Type: Signature
        # Any member can act as an endorser
        Rule: "OR('AcmeMSP.member')"
    AnchorPeers:
      - Host: acme-peer-clusterip
        Port: 30751

  - &Budget
    Name: Budget
    ID: BudgetMSP
    MSPDir: ./crypto-config/peerOrganizations/budget.com/msp
    Policies: &BudgetPolicies
      Readers:
        Type: Signature
        # Any member
        Rule: "OR('BudgetMSP.member')"
      Writers:
        Type: Signature
        # Any member
        Rule: "OR('BudgetMSP.member')"
      Admins:
        Type: Signature
        # BOTH Budget Admin AND Orderer Admin needed for admin activities
        Rule: "OR('BudgetMSP.member')"
      Endorsement:
        Type: Signature
        Rule: "OR('BudgetMSP.member')"
    AnchorPeers:
      - Host: budget-peer-clusterip
        Port: 30851
Connection.json
{
  "name": "first-network-acme",
  "version": "1.0.0",
  "client": {
    "organization": "AcmeMSP",
    "connection": {
      "timeout": {
        "peer": {
          "endorser": "300"
        }
      }
    }
  },
  "organizations": {
    "AcmeMSP": {
      "mspid": "AcmeMSP",
      "peers": [
        "peer1.acme.com"
      ],
      "certificateAuthorities": [
      ]
    }
  },
  "channel": {
    "airlinechannel": {
      "orderers": [
        "orderer.acme.com"
      ],
      "peers": {
        "peer1.acme.com": {}
      }
    }
  },
  "peers": {
    "peer1.acme.com": {
      "url": "grpc://10.109.214.71:3005",
      "tlsCACerts": {
        "pem": "/crypto-config/peerOrganizations/acme.com/tlsca/tlsca.acme.com-cert.pem"
      },
      "grpcOptions": {
        "ssl-target-name-override": "peer1.acme.com",
        "hostnameOverride": "peer1.acme.com"
      }
    }
  },
  "certificateAuthorities": {
  }
}
Logs after running invoke.js
2020-11-26T05:31:09.252Z | connectivity_state | dns:localhost:30751 CONNECTING -> CONNECTING
2020-11-26T05:31:09.252Z | dns_resolver | Resolved addresses for target dns:localhost:30751: [127.0.0.1:30751]
2020-11-26T05:31:09.252Z | pick_first | IDLE -> IDLE
2020-11-26T05:31:09.252Z | resolving_load_balancer | dns:localhost:30751 CONNECTING -> IDLE
2020-11-26T05:31:09.253Z | connectivity_state | dns:localhost:30751 CONNECTING -> IDLE
2020-11-26T05:31:09.253Z | pick_first | Connect to address list 127.0.0.1:30751
2020-11-26T05:31:09.253Z | subchannel | 127.0.0.1:30751 refcount 3 -> 4
2020-11-26T05:31:09.253Z | pick_first | IDLE -> TRANSIENT_FAILURE
2020-11-26T05:31:09.253Z | resolving_load_balancer | dns:localhost:30751 IDLE -> TRANSIENT_FAILURE
2020-11-26T05:31:09.253Z | connectivity_state | dns:localhost:30751 IDLE -> TRANSIENT_FAILURE
2020-11-26T05:31:12.254Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Endorser- name: acme-peer-clusterip:30751, url:grpc://localhost:30751, connected:false, connectAttempted:true
2020-11-26T05:31:12.254Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server acme-peer-clusterip:30751 url:grpc://localhost:30751 timeout:3000
2020-11-26T05:31:12.254Z - error: [DiscoveryService]: _buildPeer[dsg-test] - Unable to connect to the discovered peer acme-peer-clusterip:30751 due to Error: Failed to connect before the deadline on Endorser- name: acme-peer-clusterip:30751, url:grpc://localhost:30751, connected:false, connectAttempted:true
2020-11-26T05:31:12.261Z - error: [DiscoveryHandler]: _build_endorse_group_member >> G1:0 - returning an error endorsement, no endorsement made
2020-11-26T05:31:12.261Z - error: [Transaction]: Error: No valid responses from any peers. Errors:
peer=undefined, status=grpc, message=Endorsement has failed
Failed to submit transaction: Error: No valid responses from any peers. Errors:
peer=undefined, status=grpc, message=Endorsement has failed
Below is the Kubernetes pod setup:
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/acme-orderer-0 1/1 Running 0 107m
pod/acme-peer-0 2/2 Running 0 107m
pod/budget-peer-0 2/2 Running 0 107m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/acme-orderer-clusterip ClusterIP 10.108.218.191 <none> 30750/TCP 107m
service/acme-orderer-nodeport NodePort 10.111.186.82 <none> 30750:30750/TCP 107m
service/acme-peer-clusterip ClusterIP 10.98.236.210 <none> 30751/TCP,30752/TCP 107m
service/acme-peer-nodeport NodePort 10.101.38.254 <none> 30751:30751/TCP,30752:30752/TCP 107m
service/budget-peer-clusterip ClusterIP 10.108.194.45 <none> 30851/TCP 107m
service/budget-peer-nodeport NodePort 10.100.136.250 <none> 30851:30851/TCP,30852:30852/TCP 107m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 112m
service/svc-acme-orderer LoadBalancer 10.105.155.207 10.105.155.207 6005:30696/TCP 27m
service/svc-acme-peer LoadBalancer 10.98.44.14 10.109.214.71 3005:30594/TCP 56m
NAME READY AGE
statefulset.apps/acme-orderer 1/1 107m
statefulset.apps/acme-peer 1/1 107m
statefulset.apps/budget-peer 1/1 10
Some things to check:
- connection1-org1.json: invoke.js loads connection1-org1.json, but the profile you've shown is named Connection.json. Please confirm that the file shown is the one actually used to load/access the network.
- connection.json specifies tlsCACerts, but the URL scheme used is grpc:// rather than grpcs:// (see the sketch after this list).
- connection.json does not specify the orderer URL. This may not be necessary if it is discoverable via other means.
- Is TLS enabled on the peers/orderer?
If you can confirm/amend as necessary, it will give someone a better chance of helping you.
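Assuming TLS is in fact enabled, a sketch of the amended entries (the grpcs:// scheme matches the tlsCACerts already present; the orderers block is an assumption, using the svc-acme-orderer LoadBalancer address from the kubectl output above):

"peers": {
  "peer1.acme.com": {
    "url": "grpcs://10.109.214.71:3005",
    "tlsCACerts": {
      "pem": "/crypto-config/peerOrganizations/acme.com/tlsca/tlsca.acme.com-cert.pem"
    },
    "grpcOptions": {
      "ssl-target-name-override": "peer1.acme.com",
      "hostnameOverride": "peer1.acme.com"
    }
  }
},
"orderers": {
  "orderer.acme.com": {
    "url": "grpcs://10.105.155.207:6005"
  }
}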

Tiller: dial tcp 127.0.0.1:80: connect: connection refused

Since I upgraded the versions in my EKS Terraform script, I keep getting error after error.
Currently I am stuck on this error:
Error: Get http://localhost/api/v1/namespaces/kube-system/serviceaccounts/tiller: dial tcp 127.0.0.1:80: connect: connection refused
Error: Get http://localhost/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/tiller: dial tcp 127.0.0.1:80: connect: connection refused
The script works fine and I can still use it with the old version, but I am trying to upgrade the cluster version.
provider.tf
provider "aws" {
region = "${var.region}"
version = "~> 2.0"
assume_role {
role_arn = "arn:aws:iam::${var.target_account_id}:role/terraform"
}
}
provider "kubernetes" {
config_path = ".kube_config.yaml"
version = "~> 1.9"
}
provider "helm" {
service_account = "${kubernetes_service_account.tiller.metadata.0.name}"
namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
kubernetes {
config_path = ".kube_config.yaml"
}
}
terraform {
backend "s3" {
}
}
data "terraform_remote_state" "state" {
backend = "s3"
config = {
bucket = "${var.backend_config_bucket}"
region = "${var.backend_config_bucket_region}"
key = "${var.name}/${var.backend_config_tfstate_file_key}" # var.name == CLIENT
role_arn = "${var.backend_config_role_arn}"
skip_region_validation = true
dynamodb_table = "terraform_locks"
encrypt = "true"
}
}
kubernetes.tf
resource "kubernetes_service_account" "tiller" {
#depends_on = ["module.eks"]
metadata {
name = "tiller"
namespace = "kube-system"
}
automount_service_account_token = "true"
}
resource "kubernetes_cluster_role_binding" "tiller" {
depends_on = ["module.eks"]
metadata {
name = "tiller"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
subject {
kind = "ServiceAccount"
name = "tiller"
api_group = ""
namespace = "kube-system"
}
}
terraform version: 0.12.12
eks module version: 6.0.2
It means the server: entry in your .kube_config.yaml is pointing to the wrong port (and perhaps even the wrong protocol, as normal Kubernetes communication travels over https and is secured via mutual TLS authentication), or there is no longer a proxy listening on localhost:80, or perhaps the --insecure-port used to be 80 and is now 0 (as is strongly recommended).
Regrettably, without more specifics, no one can guess what the correct value was or what it should be changed to; the check below shows what the kubeconfig currently points at.
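To inspect the kubeconfig before changing anything (assuming kubectl is available and .kube_config.yaml is the file the providers reference):

kubectl --kubeconfig .kube_config.yaml config view --minify
kubectl --kubeconfig .kube_config.yaml cluster-info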
You need to set up the Kubernetes provider in your Terraform configuration, something like this:
provider "kubernetes" {
config_path = module.EKS_cluster.kubeconfig_filename
}
This happened to me when I misconfigured the credentials Terraform uses for the cluster, so there was no access to the cluster. If you configure kubectl (or whatever you are using) to authenticate, this should be solved.
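Alternatively, a sketch that authenticates the Kubernetes provider directly against EKS instead of relying on a kubeconfig file (the aws_eks_cluster / aws_eks_cluster_auth data sources are standard; var.cluster_name is an assumed variable holding your cluster name):

data "aws_eks_cluster" "this" {
  name = var.cluster_name # assumption: the EKS cluster name
}

data "aws_eks_cluster_auth" "this" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.this.token
}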