GitLab: How to configure backups when using an object store - Kubernetes

We are running GitLab in our Kubernetes cluster, using the rook-ceph Rados-Gateway as the S3 storage backend. We want to use the backup-utility shipped in GitLab's tools container.
As the backup target we configured an external MinIO instance.
When running the backup-utility, these error messages occur:
Bucket not found: gitlab-registry-bucket. Skipping backup of registry ...
Bucket not found: gitlab-uploads-bucket. Skipping backup of uploads ...
Bucket not found: gitlab-artifacts-bucket. Skipping backup of artifacts ...
Bucket not found: gitlab-lfs-bucket. Skipping backup of lfs ...
Bucket not found: gitlab-packages-bucket. Skipping backup of packages ...
Bucket not found: gitlab-mr-diffs. Skipping backup of external_diffs ...
Bucket not found: gitlab-terraform-state. Skipping backup of terraform_state ...
Bucket not found: gitlab-pages-bucket. Skipping backup of pages ...
When I execute s3cmd ls, I only see the two backup buckets on our MinIO instance, not the "source" buckets.
Can someone tell me how to configure the backup-utility or s3cmd so it can access both the Rados-Gateway for the source buckets and MinIO as the backup target?
I have tried to insert multiple connections into the .s3cfg file like this:
[target]
host_base = file01.xxx.xxx:80
host_bucket = file01.xxx.xxx:80
use_https = false
bucket_location = us-east-1
access_key = xxx
secret_key = xxx
[source]
host_base = s3.xxx.xxx:80
host_bucket = s3.xxx.xxx:80
use_https = false
bucket_location = us-east-1
access_key = xxx
secret_key = xxx
but that did not show any buckets from the target when using s3cmd ls.
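From what I can tell so far, s3cmd only honors a single [default] section per configuration file, so my [source]/[target] sections are probably being ignored. As a workaround I can keep one config file per endpoint and pick one with -c (a sketch; the file names are mine):

# List the source buckets on the Rados-Gateway
s3cmd -c ~/.s3cfg-source ls

# List the backup buckets on MinIO
s3cmd -c ~/.s3cfg-target ls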

@Löppinator: Please check the GitLab documentation link here for values.yaml; a sample configuration looks like the one below:
global:
  .
  .
  .
  pages: # pages bucket to be added with connection
    enabled: true
    host: <hostname>
    artifactsServer: true
    objectStore:
      enabled: true
      bucket: <s3-bucket-name>
      # proxy_download: true
      connection:
        secret: <secret-for-s3-connection>
  .
  .
  .
  appConfig:
    .
    .
    .
    object_store:
      enabled: true
      proxy_download: true
      connection:
        secret: <secret-for-s3-connection>
    lfs:
      enabled: true
      proxy_download: false
      bucket: <s3-bucket-name>
      connection: {}
    artifacts:
      enabled: true
      proxy_download: true
      bucket: <s3-bucket-name>
      connection: {}
    uploads:
      enabled: true
      proxy_download: true
      bucket: <s3-bucket-name>
      connection: {}
    packages:
      enabled: true
      proxy_download: true
      bucket: <s3-bucket-name>
      connection: {}
    externalDiffs:
      enabled: true
      proxy_download: true
      bucket: <s3-bucket-name>
      connection: {}
    terraformState:
      enabled: true
      bucket: <s3-bucket-name>
      connection: {}
    ciSecureFiles:
      enabled: true
      bucket: <s3-bucket-name>
      connection: {}
    dependencyProxy:
      enabled: true
      proxy_download: true
      bucket: <s3-bucket-name>
      connection: {}
    backups:
      bucket: <s3-bucket-name>
      tmpBucket: <s3-bucket-name>
  registry: # registry bucket also should be added in S3 and no connection is required here
    bucket: <s3-bucket-name>
Check the indentation carefully: the pages and registry buckets sit under the global config, while the rest of the buckets go under appConfig, as you can see in my code above.
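For completeness: the secret referenced by <secret-for-s3-connection> is expected to contain a Rails/fog-style connection YAML. A minimal sketch for an S3-compatible endpoint such as the Rados-Gateway might look like this (the endpoint and credentials are placeholders):

# rails.s3.yaml — contents of the "connection" key in the secret
provider: AWS
region: us-east-1
aws_access_key_id: <access-key>
aws_secret_access_key: <secret-key>
endpoint: "http://s3.xxx.xxx:80"
path_style: true

It can then be stored with, for example, kubectl create secret generic <secret-for-s3-connection> --from-file=connection=rails.s3.yaml.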
I hope this helps!


cloud-init - strange hostname behavior

I am using Cloud Assembly to create a VM with the following settings:
hostname: redhat-kouvas-1500-localtest
domain: test.local
Cloud Assembly code below:
cloudConfig: |
  #cloud-config
  preserve_hostname: false
  prefer_fqdn_over_hostname: false
  hostname: '${input.hostname}'
  fqdn: '${input.hostname}.${input.domain}'
What I am getting is the following:
[root@redhat-kouvas-1500-localtest log]# hostname
redhat-kouvas-1500-localtest.test.local
[root@redhat-kouvas-1500-localtest log]# cat /var/lib/cloud/data/set-hostname
{
  "fqdn": "redhat-kouvas-1500-localtest.test.local",
  "hostname": "redhat-kouvas-1500-localtest"
}
[root@redhat-kouvas-1500-localtest log]# cat /var/lib/cloud/data/previous-hostname
redhat-kouvas-1500-localtest.test.local[root@redhat-kouvas-1500-localtest log]#
Do you know why cloud-init shows this strange behavior?
================================================================================
Copied the following from the cloud-init documentation (cloud-init documentation link):
Internal name: cc_set_hostname
Module frequency: once-per-instance
Supported distros: all
Config schema:
preserve_hostname: (boolean) If true, the hostname will not be changed. Default: false.
hostname: (string) The hostname to set.
fqdn: (string) The fully qualified domain name to set.
prefer_fqdn_over_hostname: (boolean) If true, the fqdn will be used if it is set. If false, the hostname will be used. If unset, the result is distro-dependent.
Examples:
preserve_hostname: true
# --- Example2 ---
hostname: myhost
fqdn: myhost.example.com
prefer_fqdn_over_hostname: true
Issue resolved by setting preserve_hostname: true.
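In the Cloud Assembly template that amounts to something like this (a minimal sketch of the fix, per the cc_set_hostname schema quoted above):

cloudConfig: |
  #cloud-config
  # If true, cloud-init leaves the hostname alone instead of rewriting it
  preserve_hostname: true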

Autounseal Vault with GCP KMS

I would like to use the Vault auto-unseal mechanism with GCP KMS.
I have been following this tutorial (section: 'Google KMS Auto Unseal') and applying the official HashiCorp Helm chart with the following values:
global:
  enabled: true

injector:
  logLevel: "debug"

server:
  logLevel: "debug"
  extraEnvironmentVars:
    GOOGLE_REGION: global
    GOOGLE_PROJECT: ESGI-projects
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/kms-creds/credentials.json
  extraVolumes:
    - type: 'secret'
      name: 'kms-creds'
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      config: |
        ui = true

        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }

        seal "gcpckms" {
          project    = "ESGI-projects"
          region     = "global"
          key_ring   = "gitter"
          crypto_key = "vault-helm-unseal-key"
        }

        storage "raft" {
          path = "/vault/data"
        }
I have created a kms-creds secret with the JSON credentials for a service account (I have tried the Cloud KMS Service Agent and Owner roles, but neither of them works).
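For reference, I created that secret roughly like this (the local file path is mine):

kubectl create secret generic kms-creds \
  --from-file=credentials.json=./credentials.json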
Here are the keys in my key ring:
My cluster is just a local cluster created with kind.
On the first replica of the Vault server everything seems OK (though it is not running):
And on the two others I got the normal message claiming that the Vault is sealed:
Any idea what could be wrong? Should I create one key for each replica?
OK, I have succeeded in setting up Vault with auto-unseal!
What I did:
- Changed the project (the ID was required, not the name)
- Disabled raft (raft.enabled: false)
- Moved the backend to Google Cloud Storage, adding to the config:
storage "gcs" {
bucket = "gitter-secrets"
ha_enabled = "true"
}
ha_enabled = "true" was compulsory (with a regional bucket).
My final Helm values are:
global:
  enabled: true

injector:
  logLevel: "debug"

server:
  logLevel: "debug"
  extraEnvironmentVars:
    GOOGLE_REGION: global
    GOOGLE_PROJECT: esgi-projects-354109
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/kms-creds/credentials.json
  extraVolumes:
    - type: 'secret'
      name: 'kms-creds'
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: false
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      seal "gcpckms" {
        project    = "esgi-projects-354109"
        region     = "global"
        key_ring   = "gitter"
        crypto_key = "vault-helm-unseal-key"
      }

      storage "gcs" {
        bucket     = "gitter-secrets"
        ha_enabled = "true"
      }
Using a service account with these permissions:
- Cloud KMS CryptoKey Encrypter/Decrypter
- Storage Object Admin (on gitter-secrets only)
I had an issue at first: vault-0 needed a vault operator init to be run. After trying several things (post-install hooks, among others) and coming back to the initial state, the pods were unsealing normally without my running anything.
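For anyone hitting the same thing, the one-time manual initialization I mention was just (pod name per the chart defaults):

kubectl exec -ti vault-0 -- vault operator init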

Using the JHipster framework to configure MongoDB reports 'not authorized'

I used scaffolding to generate a new microservice, then I made the following configuration for MongoDB:
logging:
  level:
    ROOT: DEBUG
    io.github.jhipster: DEBUG
    com.fzai.fileservice: DEBUG

eureka:
  instance:
    prefer-ip-address: true
  client:
    service-url:
      defaultZone: http://admin:${jhipster.registry.password}@localhost:8761/eureka/

spring:
  profiles:
    active: dev
    include:
      - swagger
      # Uncomment to activate TLS for the dev profile
      #- tls
  devtools:
    restart:
      enabled: true
      additional-exclude: static/**
    livereload:
      enabled: false # we use Webpack dev server + BrowserSync for livereload
  jackson:
    serialization:
      indent-output: true
  data:
    mongodb:
      host: 42.193.124.204
      port: 27017
      username: admin
      password: admin123
      authentication-database: fileService
      database: fileService
  mail:
    host: localhost
    port: 25
    username:
    password:
  messages:
    cache-duration: PT1S # 1 second, see the ISO 8601 standard
  thymeleaf:
    cache: false
  sleuth:
    sampler:
      probability: 1 # report 100% of traces
  zipkin: # Use the "zipkin" Maven profile to have the Spring Cloud Zipkin dependencies
    base-url: http://localhost:9411
    enabled: false
    locator:
      discovery:
        enabled: true

server:
  port: 8081

# ===================================================================
# JHipster specific properties
#
# Full reference is available at: https://www.jhipster.tech/common-application-properties/
# ===================================================================

jhipster:
  cache: # Cache configuration
    hazelcast: # Hazelcast distributed cache
      time-to-live-seconds: 3600
      backup-count: 1
      management-center: # Full reference is available at: http://docs.hazelcast.org/docs/management-center/3.9/manual/html/Deploying_and_Starting.html
        enabled: false
        update-interval: 3
        url: http://localhost:8180/mancenter
  # CORS is disabled by default on microservices, as you should access them through a gateway.
  # If you want to enable it, please uncomment the configuration below.
  cors:
    allowed-origins: "*"
    allowed-methods: "*"
    allowed-headers: "*"
    exposed-headers: "Authorization,Link,X-Total-Count"
    allow-credentials: true
    max-age: 1800
  security:
    client-authorization:
      access-token-uri: http://uaa/oauth/token
      token-service-id: uaa
      client-id: internal
      client-secret: internal
  mail: # specific JHipster mail property, for standard properties see MailProperties
    base-url: http://127.0.0.1:8081
  metrics:
    logs: # Reports metrics in the logs
      enabled: false
      report-frequency: 60 # in seconds
  logging:
    use-json-format: false # By default, logs are not in Json format
    logstash: # Forward logs to logstash over a socket, used by LoggingConfiguration
      enabled: false
      host: localhost
      port: 5000
      queue-size: 512
  audit-events:
    retention-period: 30 # Number of days before audit events are deleted.

oauth2:
  signature-verification:
    public-key-endpoint-uri: http://uaa/oauth/token_key
    #ttl for public keys to verify JWT tokens (in ms)
    ttl: 3600000
    #max. rate at which public keys will be fetched (in ms)
    public-key-refresh-rate-limit: 10000
  web-client-configuration:
    #keep in sync with UAA configuration
    client-id: web_app
    secret: changeit
An error occurred while I was running the project:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mongobee' defined in class path resource [com/fzai/fileservice/config/DatabaseConfiguration.class]: Invocation of init method failed; nested exception is com.mongodb.MongoQueryException: Query failed with error code 13 and error message 'not authorized on fileService to execute command { find: "system.indexes", filter: { ns: "fileService.dbchangelog", key: { changeId: 1, author: 1 } }, limit: 1, singleBatch: true, $db: "fileService" }' on server 42.193.124.204:27017
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1771)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:593)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:847)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:744)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:391)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:312)
at com.fzai.fileservice.FileServiceApp.main(FileServiceApp.java:70)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49)
Caused by: com.mongodb.MongoQueryException: Query failed with error code 13 and error message 'not authorized on fileService to execute command { find: "system.indexes", filter: { ns: "fileService.dbchangelog", key: { changeId: 1, author: 1 } }, limit: 1, singleBatch: true, $db: "fileService" }' on server 42.193.124.204:27017
at com.mongodb.operation.FindOperation$1.call(FindOperation.java:706)
at com.mongodb.operation.FindOperation$1.call(FindOperation.java:695)
at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:462)
at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:406)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:695)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:83)
at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:179)
at com.mongodb.client.internal.FindIterableImpl.first(FindIterableImpl.java:198)
at com.github.mongobee.dao.ChangeEntryIndexDao.findRequiredChangeAndAuthorIndex(ChangeEntryIndexDao.java:35)
at com.github.mongobee.dao.ChangeEntryDao.ensureChangeLogCollectionIndex(ChangeEntryDao.java:121)
at com.github.mongobee.dao.ChangeEntryDao.connectMongoDb(ChangeEntryDao.java:61)
at com.github.mongobee.Mongobee.execute(Mongobee.java:143)
at com.github.mongobee.Mongobee.afterPropertiesSet(Mongobee.java:126)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1830)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1767)
... 19 common frames omitted
But in my other simple Spring Boot project I used the same configuration, and it runs and works successfully:
spring:
  application:
    name: springboot1
  data:
    mongodb:
      host: 42.193.124.204
      port: 27017
      username: admin
      password: admin123
      authentication-database: fileService
      database: fileService
This is the user and role I created:
{
    "_id" : "fileService.admin",
    "userId" : UUID("03f75395-f129-4273-b6a6-b2dc3d1f7974"),
    "user" : "admin",
    "db" : "fileService",
    "roles" : [
        {
            "role" : "dbOwner",
            "db" : "fileService"
        },
        {
            "role" : "readWrite",
            "db" : "fileService"
        }
    ],
    "mechanisms" : [
        "SCRAM-SHA-1",
        "SCRAM-SHA-256"
    ]
}
I want to know what's wrong.
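In case it helps to reproduce, the same credentials expressed as a single connection URI (my own equivalent rewriting of the properties above) would be:

spring:
  data:
    mongodb:
      uri: mongodb://admin:admin123@42.193.124.204:27017/fileService?authSource=fileService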

Google Cloud Deployment, INVALID_ARGUMENT

I'm trying to create a Cloud SQL instance via the Deployment Manager API. When I create it directly from a YAML file, it is created successfully; however, when I create the instance from a Jinja/Python file, I get the error below:
code: RESOURCE_ERROR
location: /deployments/olpr/resources/test
message: '{"ResourceType":"sqladmin.v1beta4.instance","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Request contains an invalid argument.","status":"INVALID_ARGUMENT","statusMessage":"Bad Request","requestPath":"https://www.googleapis.com/sql/v1beta4/projects/project_id/instances","httpMethod":"POST"}}'
Is there any way I can see which argument is invalid, so that I can fix it?
Please help me with some valid suggestions.
The resource is as below:
resources = [
    {
        'name': 'test',
        'type': 'sqladmin.v1beta4.instance',
        'properties': {
            'zone': 'europe-west1-b',
            'rootPassword': '1234567',
            'instanceType': 'CLOUD_SQL_INSTANCE',
            'databaseVersion': 'SQLSERVER_2017_EXPRESS',
            'backendType': 'SECOND_GEN',
            'settings': {
                'machineType': 'db-custom-1-3840',
                'dataDiskSizeGb': 10,
                'dataDiskType': 'PD_SSD',
                'ipConfiguration': {
                    'ipv4Enabled': False,
                    'privateNetwork': 'projects/project_id/global/networks/project_id-vpc'
                }
            }
        }
    }
]
YAML file:
resources:
  - name: he
    type: sqladmin.v1beta4.instance
    properties:
      region: europe-west1
      zone: europe-west1-b
      backendType: SECOND_GEN
      instanceType: CLOUD_SQL_INSTANCE
      databaseVersion: SQLSERVER_2017_EXPRESS
      serviceAccountEmailAddress: user@project_id.iam.gserviceaccount.com
      rootPassword: mypass
      settings:
        dataDiskSizeGb: 10
        dataDiskType: PD_SSD
        ipConfiguration:
          ipv4Enabled: false
          privateNetwork: vpc
        kind: sql#settings
        machineType: db-custom-1-3840
You're not supplying a region in the Python version. Try adding 'region': 'europe-west1' to the properties.
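In other words, the Python resource would start like this (same values as in the question, with only the region added; a sketch, not a tested template):

resources = [
    {
        'name': 'test',
        'type': 'sqladmin.v1beta4.instance',
        'properties': {
            'region': 'europe-west1',  # the missing argument
            'zone': 'europe-west1-b',
            # ... rest of the properties unchanged
        }
    }
]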

Create a Symfony bundle for a REST API

I'm working on a Symfony 3 project. I have a bundle for the admin dashboard, and I want to create another bundle for a REST API. The main route for the dashboard is evaluation.dev/app_dev.php/; for the API bundle I defined a route with FOSRestBundle like this: evaluation.dev/app_dev.php/api/.
The route for the API works well, but the main route for my admin panel does not work anymore and shows an internal server error. Can anyone give me some help? I think I should change something in the configuration or routing file.
This is my routing.yml file:
fos_user_security:
    resource: "@FOSUserBundle/Resources/config/routing/security.xml"

fos_user_profile:
    resource: "@FOSUserBundle/Resources/config/routing/profile.xml"
    prefix: /profile

fos_user_register:
    resource: "@FOSUserBundle/Resources/config/routing/registration.xml"
    prefix: /register

fos_user_resetting:
    resource: "@FOSUserBundle/Resources/config/routing/resetting.xml"
    prefix: /resetting

fos_user_change_password:
    resource: "@FOSUserBundle/Resources/config/routing/change_password.xml"
    prefix: /profile

fos_js_routing:
    resource: "@FOSJsRoutingBundle/Resources/config/routing/routing.xml"

eval:
    resource: "@EvalBundle/Controller/"
    type: annotation
    prefix: /

app:
    resource: '@AppBundle/Controller/'
    type: annotation

api:
    resource: "@APIBundle/Controller/"
    type: annotation
    prefix: /api
Here is my config.yml file:
imports:
    - { resource: parameters.yml }
    - { resource: security.yml }
    - { resource: services.yml }
    - { resource: "@EvalBundle/Resources/config/services.yml" }
    - { resource: "@EvalBundle/Resources/config/entities.yml" }

# Put parameters here that don't need to change on each machine where the app is deployed
# http://symfony.com/doc/current/best_practices/configuration.html#application-related-configuration
parameters:
    locale: en

assetic:
    debug: '%kernel.debug%'
    use_controller: '%kernel.debug%'
    filters:
        cssrewrite: ~

framework:
    #esi: ~
    #translator: { fallbacks: ['%locale%'] }
    secret: '%secret%'
    router:
        resource: '%kernel.root_dir%/config/routing.yml'
        strict_requirements: ~
    form: ~
    csrf_protection: ~
    validation: { enable_annotations: true }
    #serializer: { enable_annotations: true }
    templating:
        engines: ['twig']
    default_locale: '%locale%'
    trusted_hosts: ~
    trusted_proxies: ~
    session:
        # http://symfony.com/doc/current/reference/configuration/framework.html#handler-id
        handler_id: session.handler.native_file
        save_path: "%kernel.root_dir%/../var/sessions/%kernel.environment%"
    fragments: ~
    http_method_override: true
    assets: ~
    php_errors:
        log: true
    translator: ~
    serializer:
        enabled: true

# Twig Configuration
twig:
    debug: '%kernel.debug%'
    strict_variables: '%kernel.debug%'
    cache: false
    form_themes:
        - bootstrap_3_layout.html.twig
        - bootstrap_3_horizontal_layout.html.twig

# Doctrine Configuration
doctrine:
    dbal:
        driver: pdo_mysql
        host: '%database_host%'
        port: '%database_port%'
        dbname: '%database_name%'
        user: '%database_user%'
        password: '%database_password%'
        charset: UTF8
        mapping_types:
            enum: string
        # if using pdo_sqlite as your database driver:
        # 1. add the path in parameters.yml
        #    e.g. database_path: "%kernel.root_dir%/../var/data/data.sqlite"
        # 2. Uncomment database_path in parameters.yml.dist
        # 3. Uncomment next line:
        #path: '%database_path%'
    orm:
        auto_generate_proxy_classes: '%kernel.debug%'
        naming_strategy: doctrine.orm.naming_strategy.underscore
        auto_mapping: true

# Swiftmailer Configuration
swiftmailer:
    transport: '%mailer_transport%'
    host: '%mailer_host%'
    username: '%mailer_user%'
    password: '%mailer_password%'
    spool: { type: memory }

fos_user:
    db_driver: orm # other valid values are 'mongodb', 'couchdb' and 'propel'
    firewall_name: main
    user_class: EvalBundle\Entity\Collaborator
    from_email:
        address: amer.ff19@gmail.com
        sender_name: amer ff

knp_paginator:
    page_range: 1 # default page range used in pagination control
    default_options:
        page_name: page # page query parameter name
        sort_field_name: sort # sort field query parameter name
        sort_direction_name: direction # sort direction query parameter name
        distinct: true # ensure distinct results, useful when ORM queries are using GROUP BY statements
    template:
        pagination: 'KnpPaginatorBundle:Pagination:twitter_bootstrap_v3_pagination.html.twig' # sliding pagination controls template
        sortable: 'KnpPaginatorBundle:Pagination:sortable_link.html.twig' # sort link template

fos_rest:
    routing_loader:
        include_format: false
    view:
        view_response_listener: true
    format_listener:
        rules:
            - { path: '^/', priorities: ['json'], fallback_format: 'json' }
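While writing this up, I noticed that the format_listener rule matches ^/, so it presumably forces JSON on every route, including the admin panel. A common arrangement is to scope the JSON rule to ^/api and let everything else fall back to HTML; an untested sketch of what I plan to try:

fos_rest:
    format_listener:
        rules:
            # API routes are negotiated/served as JSON
            - { path: '^/api', priorities: ['json'], fallback_format: 'json' }
            # everything else (admin dashboard) stays HTML
            - { path: '^/', priorities: ['html', '*/*'], fallback_format: 'html' }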