I am using Cloud Assembly to create a VM with the following settings:
hostname: redhat-kouvas-1500-localtest
domain: test.local
Cloud Assembly code below:
cloudConfig: |
  #cloud-config
  preserve_hostname: false
  prefer_fqdn_over_hostname: false
  hostname: '${input.hostname}'
  fqdn: '${input.hostname}.${input.domain}'
What I am getting is the following:
[root@redhat-kouvas-1500-localtest log]# hostname
redhat-kouvas-1500-localtest.test.local
cat /var/lib/cloud/data/set-hostname
{
  "fqdn": "redhat-kouvas-1500-localtest.test.local",
  "hostname": "redhat-kouvas-1500-localtest"
}
cat /var/lib/cloud/data/previous-hostname
redhat-kouvas-1500-localtest.test.local[root@redhat-kouvas-1500-localtest log]#
Do you know why cloud-init behaves this way?
================================================================================
Copied the following from the cloud-init documentation:
Internal name: cc_set_hostname
Module frequency: once-per-instance
Supported distros: all
Config schema:
preserve_hostname: (boolean) If true, the hostname will not be changed. Default: false.
hostname: (string) The hostname to set.
fqdn: (string) The fully qualified domain name to set.
prefer_fqdn_over_hostname: (boolean) If true, the fqdn will be used if it is set. If false, the hostname will be used. If unset, the result is distro-dependent.
Examples:
preserve_hostname: true
# --- Example2 ---
hostname: myhost
fqdn: myhost.example.com
prefer_fqdn_over_hostname: true
Issue resolved by changing preserve_hostname to true.
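For reference, a minimal sketch of the adjusted cloudConfig, assuming the same ${input.hostname} and ${input.domain} bindings as above:
cloudConfig: |
  #cloud-config
  # With preserve_hostname: true, cc_set_hostname leaves the hostname set at
  # provisioning time alone (the hostname/fqdn keys below are then ignored by
  # this module, per the docs quoted above).
  preserve_hostname: true
  hostname: '${input.hostname}'
  fqdn: '${input.hostname}.${input.domain}'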
================================================================================
I would like to use the Vault auto-unseal mechanism with GCP KMS.
I have been following this tutorial (section: 'Google KMS Auto Unseal') and applying the official HashiCorp Helm chart with the following values:
global:
  enabled: true
injector:
  logLevel: "debug"
server:
  logLevel: "debug"
  extraEnvironmentVars:
    GOOGLE_REGION: global
    GOOGLE_PROJECT: ESGI-projects
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/kms-creds/credentials.json
  extraVolumes:
    - type: 'secret'
      name: 'kms-creds'
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      config: |
        ui = true
        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }
        seal "gcpckms" {
          project    = "ESGI-projects"
          region     = "global"
          key_ring   = "gitter"
          crypto_key = "vault-helm-unseal-key"
        }
        storage "raft" {
          path = "/vault/data"
        }
I have created a kms-creds secret with the JSON credentials for a service account (I have tried the Cloud KMS Service Agent and Owner roles, but neither works).
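For context, the secret was created along these lines (the key name inside the secret is assumed to be credentials.json, matching GOOGLE_APPLICATION_CREDENTIALS above):
# Mounted at /vault/userconfig/kms-creds/ via the extraVolumes entry
kubectl create secret generic kms-creds \
  --from-file=credentials.json=./credentials.json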
Here are the keys in my key ring (screenshot omitted).
My cluster is just a local cluster created with kind.
On the first replica of the Vault server everything seems OK, though the pod is not in the Running state (log output omitted).
On the other two I get the usual message saying that Vault is sealed (log output omitted).
Any idea what could be wrong? Should I create one key for each replica?
OK, I have succeeded in setting up Vault with auto-unseal!
What I did:
Changed the project (the project ID was required, not the name)
Disabled Raft (raft.enabled: false)
Moved the backend to Google Cloud Storage, adding this to the config:
storage "gcs" {
bucket = "gitter-secrets"
ha_enabled = "true"
}
Setting ha_enabled = "true" was compulsory (with a regional bucket).
My final Helm values are:
global:
  enabled: true
injector:
  logLevel: "debug"
server:
  logLevel: "debug"
  extraEnvironmentVars:
    GOOGLE_REGION: global
    GOOGLE_PROJECT: esgi-projects-354109
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/kms-creds/credentials.json
  extraVolumes:
    - type: 'secret'
      name: 'kms-creds'
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: false
    config: |
      ui = true
      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      seal "gcpckms" {
        project    = "esgi-projects-354109"
        region     = "global"
        key_ring   = "gitter"
        crypto_key = "vault-helm-unseal-key"
      }
      storage "gcs" {
        bucket     = "gitter-secrets"
        ha_enabled = "true"
      }
Using a service account with the following permissions (roughly grantable as sketched below):
Cloud KMS CryptoKey Encrypter/Decrypter
Storage Object Admin (on gitter-secrets only)
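A sketch of granting those two roles; the service account name SA_NAME is a placeholder:
# Assumption: SA_NAME stands in for the real service account
gcloud kms keys add-iam-policy-binding vault-helm-unseal-key \
  --keyring gitter --location global \
  --member serviceAccount:SA_NAME@esgi-projects-354109.iam.gserviceaccount.com \
  --role roles/cloudkms.cryptoKeyEncrypterDecrypter
gsutil iam ch \
  serviceAccount:SA_NAME@esgi-projects-354109.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://gitter-secrets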
I had an issue at first: vault-0 needed a vault operator init. After trying several things (post-install hooks among others) and coming back to the initial state, the pods were unsealing normally without my running anything.
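For anyone hitting that first-boot step, the one-time initialization looks like this (pod name per the chart's defaults):
# Initialize the first Vault pod once; with the gcpckms seal configured,
# the usual unseal keys are replaced by KMS-backed recovery keys.
kubectl exec -ti vault-0 -- vault operator init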
================================================================================
I used scaffolding to generate a new microservice, then I made the following configuration for MongoDB:
logging:
  level:
    ROOT: DEBUG
    io.github.jhipster: DEBUG
    com.fzai.fileservice: DEBUG

eureka:
  instance:
    prefer-ip-address: true
  client:
    service-url:
      defaultZone: http://admin:${jhipster.registry.password}@localhost:8761/eureka/

spring:
  profiles:
    active: dev
    include:
      - swagger
      # Uncomment to activate TLS for the dev profile
      #- tls
  devtools:
    restart:
      enabled: true
      additional-exclude: static/**
    livereload:
      enabled: false # we use Webpack dev server + BrowserSync for livereload
  jackson:
    serialization:
      indent-output: true
  data:
    mongodb:
      host: 42.193.124.204
      port: 27017
      username: admin
      password: admin123
      authentication-database: fileService
      database: fileService
  mail:
    host: localhost
    port: 25
    username:
    password:
  messages:
    cache-duration: PT1S # 1 second, see the ISO 8601 standard
  thymeleaf:
    cache: false
  sleuth:
    sampler:
      probability: 1 # report 100% of traces
  zipkin: # Use the "zipkin" Maven profile to have the Spring Cloud Zipkin dependencies
    base-url: http://localhost:9411
    enabled: false
    locator:
      discovery:
        enabled: true

server:
  port: 8081

# ===================================================================
# JHipster specific properties
#
# Full reference is available at: https://www.jhipster.tech/common-application-properties/
# ===================================================================

jhipster:
  cache: # Cache configuration
    hazelcast: # Hazelcast distributed cache
      time-to-live-seconds: 3600
      backup-count: 1
      management-center: # Full reference is available at: http://docs.hazelcast.org/docs/management-center/3.9/manual/html/Deploying_and_Starting.html
        enabled: false
        update-interval: 3
        url: http://localhost:8180/mancenter
  # CORS is disabled by default on microservices, as you should access them through a gateway.
  # If you want to enable it, please uncomment the configuration below.
  cors:
    allowed-origins: "*"
    allowed-methods: "*"
    allowed-headers: "*"
    exposed-headers: "Authorization,Link,X-Total-Count"
    allow-credentials: true
    max-age: 1800
  security:
    client-authorization:
      access-token-uri: http://uaa/oauth/token
      token-service-id: uaa
      client-id: internal
      client-secret: internal
  mail: # specific JHipster mail property, for standard properties see MailProperties
    base-url: http://127.0.0.1:8081
  metrics:
    logs: # Reports metrics in the logs
      enabled: false
      report-frequency: 60 # in seconds
  logging:
    use-json-format: false # By default, logs are not in Json format
    logstash: # Forward logs to logstash over a socket, used by LoggingConfiguration
      enabled: false
      host: localhost
      port: 5000
      queue-size: 512
  audit-events:
    retention-period: 30 # Number of days before audit events are deleted.

oauth2:
  signature-verification:
    public-key-endpoint-uri: http://uaa/oauth/token_key
    #ttl for public keys to verify JWT tokens (in ms)
    ttl: 3600000
    #max. rate at which public keys will be fetched (in ms)
    public-key-refresh-rate-limit: 10000
  web-client-configuration:
    #keep in sync with UAA configuration
    client-id: web_app
    secret: changeit
An error occurred while I was running the project:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mongobee' defined in class path resource [com/fzai/fileservice/config/DatabaseConfiguration.class]: Invocation of init method failed; nested exception is com.mongodb.MongoQueryException: Query failed with error code 13 and error message 'not authorized on fileService to execute command { find: "system.indexes", filter: { ns: "fileService.dbchangelog", key: { changeId: 1, author: 1 } }, limit: 1, singleBatch: true, $db: "fileService" }' on server 42.193.124.204:27017
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1771)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:593)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:847)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:744)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:391)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:312)
at com.fzai.fileservice.FileServiceApp.main(FileServiceApp.java:70)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49)
Caused by: com.mongodb.MongoQueryException: Query failed with error code 13 and error message 'not authorized on fileService to execute command { find: "system.indexes", filter: { ns: "fileService.dbchangelog", key: { changeId: 1, author: 1 } }, limit: 1, singleBatch: true, $db: "fileService" }' on server 42.193.124.204:27017
at com.mongodb.operation.FindOperation$1.call(FindOperation.java:706)
at com.mongodb.operation.FindOperation$1.call(FindOperation.java:695)
at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:462)
at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:406)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:695)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:83)
at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:179)
at com.mongodb.client.internal.FindIterableImpl.first(FindIterableImpl.java:198)
at com.github.mongobee.dao.ChangeEntryIndexDao.findRequiredChangeAndAuthorIndex(ChangeEntryIndexDao.java:35)
at com.github.mongobee.dao.ChangeEntryDao.ensureChangeLogCollectionIndex(ChangeEntryDao.java:121)
at com.github.mongobee.dao.ChangeEntryDao.connectMongoDb(ChangeEntryDao.java:61)
at com.github.mongobee.Mongobee.execute(Mongobee.java:143)
at com.github.mongobee.Mongobee.afterPropertiesSet(Mongobee.java:126)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1830)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1767)
... 19 common frames omitted
But in my other simple Spring Boot project I used the same configuration, and it runs and connects successfully:
spring:
  application:
    name: springboot1
  data:
    mongodb:
      host: 42.193.124.204
      port: 27017
      username: admin
      password: admin123
      authentication-database: fileService
      database: fileService
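For reference, the same connection can be written in single-URI form (a standard Spring Boot property, equivalent to the individual settings above):
spring:
  data:
    mongodb:
      uri: mongodb://admin:admin123@42.193.124.204:27017/fileService?authSource=fileService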
This is the user and role I created:
{
    "_id" : "fileService.admin",
    "userId" : UUID("03f75395-f129-4273-b6a6-b2dc3d1f7974"),
    "user" : "admin",
    "db" : "fileService",
    "roles" : [
        {
            "role" : "dbOwner",
            "db" : "fileService"
        },
        {
            "role" : "readWrite",
            "db" : "fileService"
        }
    ],
    "mechanisms" : [
        "SCRAM-SHA-1",
        "SCRAM-SHA-256"
    ]
}
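A user like that would have been created roughly like this (a sketch of the equivalent mongo shell command, not necessarily the exact one used):
# Run with the legacy mongo shell; authenticate first as a user
# that is allowed to create users on this database.
mongo "mongodb://42.193.124.204:27017/fileService" --eval '
  db.createUser({
    user: "admin",
    pwd: "admin123",
    roles: [
      { role: "dbOwner",   db: "fileService" },
      { role: "readWrite", db: "fileService" }
    ]
  })'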
I want to know what's wrong.
================================================================================
I want to add Monolog logging to MongoDB with the default handler (MongoDBHandler) in Symfony 4.
My monolog.yaml file in the dev folder:
monolog:
    handlers:
        mongo:
            type: mongo
            mongo:
                id: monolog.logger.mongo
                host: '%env(MONGODB_URL)%'
                database: '%env(MONGODB_DB)%'
                collection: logs
My services.yaml:
services:
    monolog.logger.mongo:
        class: Monolog\Handler\MongoDBHandler
        arguments: ['@doctrine_mongodb']
My doctrine_mongodb.yaml:
doctrine_mongodb:
    auto_generate_proxy_classes: '%kernel.debug%'
    auto_generate_hydrator_classes: '%kernel.debug%'
    connections:
        default:
            server: '%env(MONGODB_URL)%'
            options:
                db: '%env(MONGODB_DB)%'
        log:
            server: '%env(MONGODB_URL)%'
            options:
                db: '%env(MONGODB_DB)%'
                connect: true
    default_database: '%env(MONGODB_DB)%'
    document_managers:
        log:
            auto_mapping: false
            logging: false
But it doesn't work. One of the errors:
Cannot autowire service "monolog.logger.mongo": argument "$database"
of method "Monolog\Handler\MongoDBHandler::__construct()" is
type-hinted "string", you should configure its value explicitly.
This happens even though I set the database option in the monolog config. Is there any documentation for this?
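For what it's worth, the autowiring error seems to ask for an explicit service definition along these lines (a sketch, untested; positional arguments are used because the constructor's parameter names vary between Monolog versions):
services:
    monolog.logger.mongo:
        class: Monolog\Handler\MongoDBHandler
        # Positional: client service, database name, collection name
        arguments:
            - '@doctrine_mongodb.odm.default_connection'
            - '%env(MONGODB_DB)%'
            - 'logs'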
Another way to enable MongoDB for Monolog is:
monolog:
    handlers:
        mongo:
            type: mongo
            mongo:
                host: '%env(MONGODB_URL)%'
                user: myuser
                pass: mypass
                database: '%env(MONGODB_DB)%'
                collection: logs
So you need to remove the id field and add user and pass instead.
If you already use Doctrine MongoDB, it's possible to re-use its connection, avoiding extra ENV vars for a separate DSN:
monolog:
    handlers:
        mongo:
            type: mongo
            mongo:
                id: "doctrine_mongodb.odm.default_connection"
                database: "%env(MONGODB_DB)%"
                collection: MyLogDocument # Keeping this the same allows you to simply use a Doctrine repository to access the documents in your app if needed
            level: debug
I get the following error:
Attempted to load class "MongoClient" from the global namespace.
Did you forget a "use" statement?
protected function getMonolog_Handler_MongoService()
{
    $this->privates['monolog.handler.mongo'] = $instance = new \Monolog\Handler\MongoDBHandler(new \MongoClient('mongodb://admin:pass@localhost:27017'), 'monolog', 'logs', 100, true);

    $instance->pushProcessor(($this->privates['monolog.processor.psr_log_message'] ?? ($this->privates['monolog.processor.psr_log_message'] = new \Monolog\Processor\PsrLogMessageProcessor())));

    return $instance;
}
================================================================================
I'm trying to set up Codeception to use an SQLite database during testing, but I am running into the error below. I've tried including bootstrap/app.php so that the application is running, but that didn't fix it. Does anybody have an idea?
I'm using:
Lumen v5.7.4
PHP v7.2.10
Codeception v2.5.1
LPaymentTransactionTest.php
public function testReturn(): void
{
    \App\DAO\Order::find(1);
}
codeception.yml
paths:
    tests: tests
    output: tests/_output
    data: tests/_data
    support: tests/_support
    envs: tests/_envs
actor_suffix: Tester
extensions:
    enabled:
        - Codeception\Extension\RunFailed
modules:
    enabled:
        - Asserts
        - \Helper\Unit
        - Db:
            dsn: 'sqlite:tests/_data/sqliteTestDb.db'
            user: ''
            password: ''
            # dump: 'tests/_data/test.sql'
            dump: 'tests/_data/databaseDump.sql'
            populate: true
            cleanup: true
Full error:
Call to a member function connection() on null
/home/projects/vendor/illuminate/database/Eloquent/Model.php:1239
/home/projects/vendor/illuminate/database/Eloquent/Model.php:1205
/home/projects/vendor/illuminate/database/Eloquent/Model.php:1035
/home/projects/vendor/illuminate/database/Eloquent/Model.php:952
/home/projects/vendor/illuminate/database/Eloquent/Model.php:988
/home/projects/vendor/illuminate/database/Eloquent/Model.php:941
/home/projects/vendor/illuminate/database/Eloquent/Model.php:1608
/home/projects/vendor/illuminate/database/Eloquent/Model.php:1620
/home/projects/tests/unit/LPaymentTransactionTest.php:96
/tmp/ide-codeception.php:40
Edit: the model does work outside of the tests; if I call it in routes/web.php it returns the data without a problem. It just doesn't seem to work within the test.
Edit 2: it looks like the application isn't being launched; I will update with the fix once I find it.
The suite configuration that ended up working is below (the Lumen module is what boots the application for the tests):
actor: UnitTester
modules:
    enabled:
        - Asserts
        - \Helper\Unit
        - Cli
        - Lumen
        - Db:
            dsn: 'sqlite:tests/_data/database.sqlite'
            dbname: 'tests/_data/database.sqlite'
            dump: 'tests/_data/test.sql'
            user: ''
            password: ''
            populate: true
            cleanup: false
            reconnect: true
            waitlock: 0
step_decorators: ~
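With that in place, a typical invocation (standard Codeception commands) is:
# Rebuild actor classes after config changes, then run the unit suite
vendor/bin/codecept build
vendor/bin/codecept run unit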
================================================================================
After adding the rectify gem, all tests fail with this error:
ActiveRecord::ConnectionNotEstablished:
No connection pool for ActiveRecord::Base
We are using:
Gems:
Rails 4.2.8
Mongoid 5
Rectify 0.9.1
RSpec 3.4.4
Other:
OS: Ubuntu 16.04 LTS
MongoDB: 3.4.3
Database run by docker-compose (docker-compose version 1.12.0)
In development mode everything works fine.
mongoid.yml
development:
  clients:
    default:
      database: development
      hosts:
        - localhost:27017
      options:
        heartbeat_frequency: 10
        local_threshold: 0.015
        server_selection_timeout: 30
        max_pool_size: 5
        min_pool_size: 1
        wait_queue_timeout: 1
        connect_timeout: 10
        socket_timeout: 5
        ssl: false
        ssl_cert: /path/to/my.cert
        ssl_key: /path/to/my.key
        ssl_key_pass_phrase: password
        ssl_verify: true
        ssl_ca_cert: /path/to/ca.cert
  options:
    include_root_in_json: false
    include_type_for_serialization: false
    preload_models: false
    raise_not_found_error: false
    scope_overwrite_exception: false
    use_activesupport_time_zone: true
    use_utc: false
    log_level: debug

test:
  clients:
    default:
      database: test
      hosts:
        - localhost:27017
      options:
        heartbeat_frequency: 10
        local_threshold: 0.015
        server_selection_timeout: 30
        max_pool_size: 1
        min_pool_size: 1
        wait_queue_timeout: 4
        connect_timeout: 10
        socket_timeout: 5
        ssl: false
        ssl_cert: /path/to/my.cert
        ssl_key: /path/to/my.key
        ssl_key_pass_phrase: password
        ssl_verify: true
        ssl_ca_cert: /path/to/ca.cert
  options:
    include_root_in_json: false
    include_type_for_serialization: false
    preload_models: false
    raise_not_found_error: false
    scope_overwrite_exception: false
    use_activesupport_time_zone: true
    use_utc: false
    log_level: debug
Stacktrace
ActiveRecord::ConnectionNotEstablished:
No connection pool for ActiveRecord::Base
# /home/user_home_directory/.rvm/gems/ruby-2.3.3@testing/gems/activerecord-4.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:570:in `retrieve_connection'
# /home/user_home_directory/.rvm/gems/ruby-2.3.3@testing/gems/activerecord-4.2.8/lib/active_record/connection_handling.rb:113:in `retrieve_connection'
# /home/user_home_directory/.rvm/gems/ruby-2.3.3@testing/gems/activerecord-4.2.8/lib/active_record/connection_handling.rb:87:in `connection'
# /home/user_home_directory/.rvm/gems/ruby-2.3.3@testing/gems/activerecord-4.2.8/lib/active_record/fixtures.rb:501:in `create_fixtures'
# /home/user_home_directory/.rvm/gems/ruby-2.3.3@testing/gems/activerecord-4.2.8/lib/active_record/fixtures.rb:979:in `load_fixtures'
# /home/user_home_directory/.rvm/gems/ruby-2.3.3@testing/gems/activerecord-4.2.8/lib/active_record/fixtures.rb:952:in `setup_fixtures'
# /home/user_home_directory/.rvm/gems/ruby-2.3.3@testing/gems/activerecord-4.2.8/lib/active_record/fixtures.rb:826:in `before_setup'
It looks like there is some simple mistake in the test database configuration, but I cannot figure out what exactly.
Any help will be appreciated. Thank you!
Rectify version 0.9.1 does not support Mongoid.
Link to the appropriate issue.
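Not from the linked issue, but a commonly suggested mitigation in Mongoid-only apps: the stack trace above fails inside ActiveRecord's fixture setup, so telling rspec-rails not to wire up ActiveRecord avoids the connection lookup. A sketch, assuming your rspec-rails version supports this option:
# spec/rails_helper.rb (sketch)
RSpec.configure do |config|
  # Skip ActiveRecord-specific features such as fixtures,
  # which triggered the missing connection pool above.
  config.use_active_record = false
end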