I am trying to issue a renewable ticket for my principal using a keytab (MIT KDC, Red Hat 7.4):
su - newuser
kinit -r 7d -kt /etc/security/keytabs/newuser.service.keytab newuser/mask1.myhost.com@EXAMPLE.COM
Looking at the flags:
[newuser@mask1 ~]$ klist -f
Ticket cache: FILE:/tmp/krb5cc_2824
Default principal: newuser/mask1.myhost.com@EXAMPLE.COM
Valid starting       Expires              Service principal
09/27/2018 09:40:32  09/28/2018 09:40:32  krbtgt/EXAMPLE.COM@EXAMPLE.COM
        Flags: FI
My /etc/krb5.conf has
[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = EXAMPLE.COM
ticket_lifetime = 24h
and my /var/kerberos/krb5kdc/kdc.conf
[realms]
EXAMPLE.COM = {
#master_key_type = aes256-cts
max_renewable_life = 7d 0h 0m 0s
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
default_principal_flags = +renewable
}
What am I missing to get a renewable ticket?
Update:
I was able to make my tickets renewable by doing
kadmin
modprinc -maxrenewlife 7d krbtgt/EXAMPLE.COM@EXAMPLE.COM
modprinc -maxrenewlife 7d +allow_renewable newuser/mask1.myhost.com@EXAMPLE.COM
but this means I would need to do it for every principal. How do I make it so that all tickets are generated as renewable by default?
You can set the default (via renew_lifetime) in the [realms] section of the krb5.conf file.
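For principals that already exist, the per-principal maxrenewlife still has to be raised. A rough sketch of scripting that over all of them (an assumption, not from the original answer; it presumes root on the KDC host and that 7d matches max_renewable_life in kdc.conf):
# Sketch: raise maxrenewlife on every existing principal (run on the KDC as root)
kadmin.local -q "list_principals" | while read princ; do
  kadmin.local -q "modprinc -maxrenewlife 7d $princ"
done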
Related
Traefik 2.2.8 always serves the default certificate with this configuration:
[entryPoints]
[entryPoints.https]
address = ":8001"
[[tls.certificates]]
certFile = "/[...]/x1.y1.z1.crt"
keyFile = "/x1.y1.z1.key"
[[tls.certificates]]
certFile = "/[...]/x2.y2.z2.crt"
keyFile = "/[...]/x2.y2.z2.key"
<a dozen more certificates>
[http.routers.1]
entryPoints = ["https"]
service = "1"
rule = "Host(`x1.y1.z1`)"
[http.routers.1.tls]
[[http.routers.1.tls.domains]]
sans = ["x1.y1.z1"]
[http.services.1]
[http.services.1.loadBalancer]
[[http.services.1.loadBalancer.servers]]
url = "http://internal:10012"
I migrated from v1, so I am sure the certificates work. Even adding SANs for the router doesn't help.
All certificates have to be declared in a dynamic configuration file that is loaded with the File Provider; they cannot be defined in the static configuration.
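A minimal sketch of that split, reusing the paths from the question (the file name /etc/traefik/dynamic.toml is a placeholder):
In the static configuration (traefik.toml):
[providers.file]
filename = "/etc/traefik/dynamic.toml"
In the dynamic configuration (/etc/traefik/dynamic.toml):
[[tls.certificates]]
certFile = "/[...]/x1.y1.z1.crt"
keyFile = "/[...]/x1.y1.z1.key"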
We are using HashiCorp Vault with Consul and the filesystem as storage. My token duration is infinity, yet Vault still ends up sealed every few hours and asks for unsealing and a fresh login. Why?
config.hcl:
ui = true
storage "consul" {
address = "127.0.0.1:8500"
path = "vault"
}
backend "file" {
path = "/mnt/vault/data"
}
listener "tcp" {
address = "127.0.0.1:8200"
tls_disable = 1
}
telemetry {
statsite_address = "127.0.0.1:8125"
disable_hostname = true
}
Token lookup:
Key Value
--- -----
token **********
token_accessor ***********
token_duration ∞
token_renewable false
token_policies ["root"]
identity_policies []
policies ["root"]
Kafka version: kafka_2.1.1 (binary)
When enabling Kerberos, I followed the official documentation (https://kafka.apache.org/documentation/#security_sasl_kerberos) closely.
When I start Kafka, I get the following errors:
[2019-02-23 08:55:44,622] ERROR SASL authentication failed using login context 'Client' with exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient)
javax.security.sasl.SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null.
at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:279)
at org.apache.zookeeper.client.ZooKeeperSaslClient.respondToServer(ZooKeeperSaslClient.java:242)
at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:805)
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:94)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1145)
[2019-02-23 08:55:44,625] ERROR [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient)
[2019-02-23 08:55:44,746] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
I use almost the default krb5.conf.
includedir /etc/krb5.conf.d/
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
dns_lookup_realm = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
default_realm = EXAMPLE.COM
default_ccache_name = KEYRING:persistent:%{uid}
[realms]
EXAMPLE.COM = {
# kdc = kerberos.example.com
# admin_server = kerberos.example.com
kdc = localhost
admin_server = localhost
}
[domain_realm]
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM
The JAAS file I pass to Kafka is below:
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/localhost.keytab"
principal="kafka/localhost#EXAMPLE.COM";
};
// Zookeeper client authentication
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/localhost.keytab"
principal="kafka/localhost#EXAMPLE.COM";
};
I also set the following JVM options via the environment:
"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf -Dzookeeper.sasl.client.username=kafka"
I have googled a lot of posts but made no progress. I guess the problem may be the "localhost" I used when creating the entries in Kerberos, but I'm not quite sure how to work around it. My goal is to set up a local Kafka + Kerberos testing environment.
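Presumably the principal and keytab were created along these lines (a sketch; these exact kadmin commands are an assumption, not taken from the question):
# Sketch: creating a kafka/localhost principal and its keytab on the KDC
kadmin.local -q "addprinc -randkey kafka/localhost@EXAMPLE.COM"
kadmin.local -q "ktadd -k /etc/security/keytabs/localhost.keytab kafka/localhost@EXAMPLE.COM"
# Using the host's real FQDN instead of localhost is the usual fix for SASL hostname checks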
In our case, the krb5 Kerberos config file wasn't read properly. If you're passing the keytab through YAML, it needs to be removed first. This was with the IBM JDK, though, and we had to point the JVM at the JAAS file explicitly with System.setProperty("java.security.auth.login.config", JaasConfigFileLocation);
KafkaClient {
com.ibm.security.auth.module.Krb5LoginModule required
useDefaultKeytab=false
credsType=both
principal="xkafka#xxx.NET"
useKeytab="/opt/apps/xxxr/my.keytab";
};
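If you'd rather not set the property in code, the stock Kafka scripts pick the same option up from the environment; a sketch (the JAAS file path here is a placeholder):
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/apps/xxxr/jaas.conf"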
Environment:
Vault + Consul, all latest. Integrating Concourse (3.14.0) with Vault. All tokens and keys are throw-away. This is just a test cluster.
Problem:
No matter what I do, I get 768h as the token_duration value, and overnight my AppRole token keeps expiring. I have to regenerate the token, pass it to Concourse, and restart the service. I want this token not to expire.
[root@k1 etc]# vault write auth/approle/login role_id="34b73748-7e77-f6ec-c5fd-90c24a5a98f3" secret_id="80cc55f1-bb8b-e96c-78b0-fe61b243832d" duration=0
Key Value
--- -----
token 9a6900b7-062d-753f-131c-a2ac7eb040f1
token_accessor 171aeb1c-d2ce-0261-e20f-8ed6950d1d2a
token_duration 768h
token_renewable true
token_policies ["concourse" "default"]
identity_policies []
policies ["concourse" "default"]
token_meta_role_name concourse
[root@k1 etc]#
So I use token 9a6900b7-062d-753f-131c-a2ac7eb040f1 for Concourse to access secrets, and all is good until about 24 hours later, when it expires.
I set duration to 0, but it didn't help:
$ vault write auth/approle/role/concourse secret_id_ttl=0 period=0 policies=concourse secret_id_num_uses=0 token_num_uses=0
My modified vaultconfig.hcl looks like this:
storage "consul" {
address = "127.0.0.1:8500"
path = "vault/"
token = "95FBC040-C484-4D16-B489-AA732DB6ADF1"
#token = "0b4bc7c7-7eb0-4060-4811-5f9a7185aa6f"
}
listener "tcp" {
address = "0.0.0.0:8200"
cluster_address = "0.0.0.0:8201"
tls_min_version = "tls10"
tls_disable = 1
}
cluster_addr = "http://192.168.163.132:8201"
api_addr = "http://192.168.163.132:8200"
disable_mlock = true
disable_cache = true
ui = true
default_lease_ttl = 0
cluster_name = "testcluster"
raw_storage_endpoint = true
My Concourse policy is vanilla:
[root@k1 etc]# vault policy read concourse
path "concourse/*" {
policy = "read"
capabilities = ["read", "list"]
}
[root@k1 etc]#
Looking up token 9a6900b7-062d-753f-131c-a2ac7eb040f1:
[root@k1 etc]# vault token lookup 9a6900b7-062d-753f-131c-a2ac7eb040f1
Key Value
--- -----
accessor 171aeb1c-d2ce-0261-e20f-8ed6950d1d2a
creation_time 1532521379
creation_ttl 2764800
display_name approle
entity_id 11a0d4ac-10aa-0d62-2385-9e8071fc4185
expire_time 2018-08-26T07:22:59.764692652-05:00
explicit_max_ttl 0
id 9a6900b7-062d-753f-131c-a2ac7eb040f1
issue_time 2018-07-25T07:22:59.238050234-05:00
last_renewal 2018-07-25T07:24:44.764692842-05:00
last_renewal_time 1532521484
meta map[role_name:concourse]
num_uses 0
orphan true
path auth/approle/login
policies [concourse default]
renewable true
ttl 2763645
[root@k1 etc]#
Any pointers or feedback would be much appreciated.
Try setting the token_ttl and token_max_ttl parameters instead of secret_id_ttl when creating the new AppRole.
You should also check your Vault default_lease_ttl and max_lease_ttl; they might be set to 24h.
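A rough sketch of both suggestions (the values are examples, and the tune read assumes the AppRole mount is at the default path auth/approle):
# Set the TTLs on the role itself; 0 means "fall back to the mount/system maximum"
vault write auth/approle/role/concourse token_ttl=768h token_max_ttl=0 policies=concourse
# Inspect the effective default/max lease TTLs on the approle auth mount
vault read sys/auth/approle/tune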
Has anyone successfully set up a Kubernetes executor/runner on GitLab for CI jobs? I set up mine, but it gets stuck executing my pipeline indefinitely.
I'm running the runner as a Docker container on top of a Kubernetes cluster, connecting to my GitLab instance to handle my CI builds.
Any working config file would be appreciated.
My runner configuration looks like this:
[[runners]]
name = "kube-executor"
url = "https://gitlab.example.ltd/"
token = "some-token"
executor = "kubernetes"
[runners.cache]
[runners.kubernetes]
host = "https://my-kubernetes-api-address:443"
ca_file = "/etc/ssl/certs/ca.crt"
cert_file = "/etc/ssl/certs/server.crt"
key_file = "/etc/ssl/certs/server.key"
image = "docker:latest"
namespace = "gitlab"
namespace_overwrite_allowed = "ci-.*"
privileged = true
cpu_limit = "1"
memory_limit = "1Gi"
service_cpu_limit = "1"
service_memory_limit = "1Gi"
helper_cpu_limit = "500m"
helper_memory_limit = "100Mi"
poll_interval = 5
poll_timeout = 3600
[runners.kubernetes.volumes]
This throws the following error: ERROR: Job failed (system failure): Post https://my-kubernetes-api-address:443/api/v1/namespaces/gitlab/secrets: x509: certificate signed by unknown authority
You are using HTTPS, so where do the certs come from? Are they self-signed? If so, you have to mention the --tls-cert-file and --tls-private-key-file flags in your ConfigMap.
Copied from https://stackoverflow.com/a/43362697/432115
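If the runner simply doesn't trust the cluster CA, one possible fix (an assumption, not part of the quoted answer) is to make ca_file point at the CA your kubeconfig already trusts:
# Sketch: extract the cluster CA from kubeconfig so the runner's ca_file matches the API server
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d > /etc/ssl/certs/ca.crt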