Use RBD in Kubernetes error

I followed the example to use RBD in Kubernetes, but it does not succeed. Can anyone help me? The error is:
Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.289702 1254 volumes.go:114] Could not create volume builder for pod 5df3610e-86c8-11e5-bc34-002590fdf95c: can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched
Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.289770 1254 kubelet.go:1210] Unable to mount volumes for pod "rbd2_default": can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched; skipping pod
Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.299458 1254 pod_workers.go:111] Error syncing pod 5df3610e-86c8-11e5-bc34-002590fdf95c, skipping: can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched
And the template file I used, rbd-with-secret.json:
core@core-1-94 ~/kubernetes/examples/rbd $ cat rbd-with-secret.json
{
    "apiVersion": "v1",
    "id": "rbdpd2",
    "kind": "Pod",
    "metadata": {
        "name": "rbd2"
    },
    "spec": {
        "nodeSelector": {"kubernetes.io/hostname": "10.12.1.97"},
        "containers": [
            {
                "name": "rbd-rw",
                "image": "kubernetes/pause",
                "volumeMounts": [
                    {
                        "mountPath": "/mnt/rbd",
                        "name": "rbdpd"
                    }
                ]
            }
        ],
        "volumes": [
            {
                "name": "rbdpd",
                "rbd": {
                    "monitors": [
                        "10.14.1.33:6789",
                        "10.14.1.35:6789",
                        "10.14.1.36:6789"
                    ],
                    "pool": "rbd",
                    "image": "foo",
                    "user": "admin",
                    "secretRef": {"name": "ceph-secret"},
                    "fsType": "ext4",
                    "readOnly": true
                }
            }
        ]
    }
}
The secret:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBemV6bFdZTXdXQWhBQThxeG1IT2NKa0QrYnE0K3RZUmtsVncK
The Ceph config is in /etc/ceph/:
core@core-1-94 ~/kubernetes/examples/rbd $ ls -alh /etc/ceph
total 20K
drwxr-xr-x 2 root root 4.0K Nov 6 18:38 .
drwxr-xr-x 26 root root 4.0K Nov 9 17:07 ..
-rw------- 1 root root 63 Nov 4 11:27 ceph.client.admin.keyring
-rw-r--r-- 1 root root 264 Nov 6 18:38 ceph.conf
-rw-r--r-- 1 root root 384 Nov 6 14:35 ceph.conf.orig
-rw------- 1 root root 0 Nov 4 11:27 tmpkqDKwf
and the key is:
core@core-1-94 ~/kubernetes/examples/rbd $ sudo cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQAzezlWYMwWAhAA8qxmHOcJkD+bq4+tYRklVw==

You'll get "no volume plugins matched" if the rbd command isn't installed and in the path.
As the example specifies, you need to ensure that ceph is installed on your Kubernetes nodes. For instance, in Fedora:
$ sudo yum -y install ceph-common
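A quick way to verify whether the kubelet node can already resolve the binary (a minimal shell sketch, run on the node itself):
$ which rbd || echo "rbd not found in PATH"
Once the rbd binary is on the kubelet's PATH, the RBD volume plugin should be matched.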
I'll file an issue to clarify the error messages.

Related

Monstache not initiating the first synchronisation of current data from MongoDB and keeps waiting for events on the change stream

It's my first time using Monstache.
In fact, I've migrated my infrastructure from on-premise to the cloud, and I'm now using MongoDB Atlas and AWS OpenSearch.
I've installed Monstache on an AWS EC2 instance and configured it properly. Everything seems to be working and Monstache is connected to Elasticsearch and MongoDB, but it isn't indexing into Elasticsearch the documents that were already migrated into MongoDB Atlas. It just keeps waiting for events on my collection/index, like this:
[ec2-user@ip-172-31-1-200 ~]$ journalctl -u monstache.service -f
-- Logs begin at Wed 2022-11-09 10:22:04 UTC. --
Jan 26 08:54:00 ip-172-31-1-200.eu-west-3.compute.internal systemd[1]: Starting monstache sync service...
Jan 26 08:54:00 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:00 Started monstache version 6.1.0
Jan 26 08:54:00 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:00 Successfully connected to MongoDB version 4.4.18
Jan 26 08:54:01 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:01 Successfully connected to Elasticsearch version 7.10.2
Jan 26 08:54:01 ip-172-31-1-200.eu-west-3.compute.internal systemd[1]: Started monstache sync service.
Jan 26 08:54:01 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:01 Joined cluster HA
Jan 26 08:54:01 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:01 Starting work for cluster HA
Jan 26 08:54:01 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:01 Listening for events
Jan 26 08:54:01 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:01 Watching changes on collection wtlive.myuser
Jan 26 08:54:01 ip-172-31-1-200.eu-west-3.compute.internal monstache[27813]: INFO 2023/01/26 08:54:01 Resuming from timestamp {T:1674723241 I:1}
Do I absolutely have to initiate a write on the MongoDB collection for Monstache to start syncing? Why doesn't it start syncing the current data from MongoDB?
My Elasticsearch still shows a document count of 0, while the collection is full of documents in MongoDB.
[ec2-user@ip-172-31-0-5 ~]$ curl --insecure -u es-appuser https://vpc-wtlive-domain-staging-om2cbdeex4qk6trkdrcb3dg4vm.eu-west-3.es.amazonaws.com/_cat/indices?v
Enter host password for user 'es-appuser':
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open wtlive.myuser YzqLx9_uTZ2qFVjFF2CMag 1 1 0 0 416b 208b
green open .opendistro_security Jb1fLqGjRd2vvluoX-ZgKw 1 1 9 4 129kb 64.4kb
green open .kibana_1 v6WdqQDvSN2L16EZTXxuHQ 1 1 30 2 70.4kb 33.4kb
green open .kibana_252235597_esappuser_1 OY1bbDGvTqK8oEgwopzbhQ 1 1 1 0 10.1kb 5kb
[ec2-user@ip-172-31-0-5 ~]$
Here is my Monstache configuration:
[ec2-user@ip-172-31-1-200 ~]$ cat monstache/wtlive_pipeline.toml
enable-http-server = true
http-server-addr = ":8888"
#direct-read-namespaces = ["wtlive.myuser"]
change-stream-namespaces = ["wtlive.myuser"]
namespace-regex = '^wtlive.myuser$'
cluster-name="HA"
resume = true
replay = false
resume-write-unsafe = false
exit-after-direct-reads = false
elasticsearch-user = "es-appuser"
elasticsearch-password = "9V#xxxxxx"
elasticsearch-urls = ["https://vpc-wtlive-domain-staging-om2cbdeek6trkdrcb3dg4vm.eu-west-3.es.amazonaws.com"]
mongo-url = "mongodb://admin:VYn7ZD4CHDh8#wtlive-dedicated-shard-00-00.ynxpn.mongodb.net:27017,wtlive-dedicated-shard-00-01.ynxpn.mongodb.net:27017,wtlive-dedicated-shard-00-02.ynxpn.mongodb.net:27017/?tls=true&replicaSet=atlas-lmkye1-shard-0&authSource=admin&retryWrites=true&w=majority&tlsCAFile=/home/ec2-user/mongodb-ca.pem"
#[logs]
#info = "/home/ec2-user/logs/monstache/info.log"
#error = "/home/ec2-user/logs/monstache/error.log"
#warn = "/home/ec2-user/logs/monstache/warn.log"
#[[mapping]]
#namespace = "wtlive.myuser"
#index = "wtlive.myuser"
[[pipeline]]
namespace = "wtlive.myuser"
script = """
module.exports = function(ns, changeStream) {
  if (changeStream) {
    return [
      {
        $project: {
          _id: 1,
          operationType: 1,
          clusterTime: 1,
          documentKey: 1,
          to: 1,
          updateDescription: 1,
          txnNumber: 1,
          lsid: 1,
          "fullDocument._id": 1,
          "fullDocument.created": 1,
          "fullDocument.lastVisit": 1,
          "fullDocument.verified": 1,
          "fullDocument.device.locale": "$fullDocument.device.locale",
          "fullDocument.device.country": "$fullDocument.device.country",
          "fullDocument.device.tz": "$fullDocument.device.tz",
          "fullDocument.device.latLonCountry": "$fullDocument.device.latLonCountry",
          "fullDocument.details.firstname": "$fullDocument._details.firstname",
          "fullDocument.details.gender": "$fullDocument._details.gender",
          "fullDocument.details.category": "$fullDocument._details.category",
          "fullDocument.details.dob": "$fullDocument._details.dob",
          "fullDocument.details.lookingFor": "$fullDocument._details.lookingFor",
          "fullDocument.details.height": "$fullDocument._details.height",
          "fullDocument.details.weight": "$fullDocument._details.weight",
          "fullDocument.details.cigarette": "$fullDocument._details.cigarette",
          "fullDocument.details.categorizedBy": "$fullDocument._details.categorizedBy",
          "fullDocument.details.origin": "$fullDocument._details.origin",
          "fullDocument.details.city": "$fullDocument._details.city",
          "fullDocument.details.country": "$fullDocument._details.country",
          "fullDocument.lifeSkills.educationLevel": "$fullDocument._lifeSkills.educationLevel",
          "fullDocument.lifeSkills.pets": "$fullDocument._lifeSkills.pets",
          "fullDocument.lifeSkills.religion": "$fullDocument._lifeSkills.religion",
          "fullDocument.loveLife.children": "$fullDocument._loveLife.children",
          "fullDocument.loveLife.relationType": "$fullDocument._loveLife.relationType",
          "fullDocument.searchCriteria": "$fullDocument._searchCriteria",
          "fullDocument.blocked": 1,
          "fullDocument.capping": 1,
          "fullDocument.fillingScore": 1,
          "fullDocument.viewed": 1,
          "fullDocument.likes": 1,
          "fullDocument.matches": 1,
          "fullDocument.blacklisted": 1,
          "fullDocument.uploadsList._id": 1,
          "fullDocument.uploadsList.status": 1,
          "fullDocument.uploadsList.url": 1,
          "fullDocument.uploadsList.position": 1,
          "fullDocument.uploadsList.imageSet": 1,
          "fullDocument.location": 1,
          "fullDocument.searchZone": 1,
          "fullDocument.locationPoint": "$fullDocument.location.coordinates",
          "fullDocument.selfieDateUpload": 1,
          "ns": 1
        }
      }
    ]
  } else {
    return [
      {
        $project: {
          _id: 1,
          "created": 1,
          "lastVisit": 1,
          "verified": 1,
          "device.locale": "$device.locale",
          "device.country": "$device.country",
          "device.tz": "$device.tz",
          "device.latLonCountry": "$device.latLonCountry",
          "details.firstname": "$_details.firstname",
          "details.gender": "$_details.gender",
          "details.category": "$_details.category",
          "details.dob": "$_details.dob",
          "details.lookingFor": "$_details.lookingFor",
          "details.height": "$_details.height",
          "details.weight": "$_details.weight",
          "details.cigarette": "$_details.cigarette",
          "details.categorizedBy": "$_details.categorizedBy",
          "details.origin": "$_details.origin",
          "details.city": "$_details.city",
          "details.country": "$_details.country",
          "lifeSkills.educationLevel": "$_lifeSkills.educationLevel",
          "lifeSkills.pets": "$_lifeSkills.pets",
          "lifeSkills.religion": "$_lifeSkills.religion",
          "loveLife.children": "$_loveLife.children",
          "loveLife.relationType": "$_loveLife.relationType",
          "searchCriteria": "$_searchCriteria",
          "blocked": 1,
          "capping": 1,
          "fillingScore": 1,
          "viewed": 1,
          "likes": 1,
          "matches": 1,
          "blacklisted": 1,
          "uploadsList._id": 1,
          "uploadsList.status": 1,
          "uploadsList.url": 1,
          "uploadsList.position": 1,
          "uploadsList.imageSet": 1,
          "location": 1,
          "searchZone": 1,
          "selfieDateUpload": 1,
          "locationPoint": "$location.coordinates"
        }
      }
    ]
  }
}
"""
What could be the issue, and what action should I take from here, please?
By uncommenting the #direct-read-namespaces = ["wtlive.myuser"] line, Monstache now does the initial sync, and everything is going well.
I'll comment it out again and restart the Monstache service after the initial sync, to avoid re-syncing from scratch.
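For reference, the relevant part of wtlive_pipeline.toml after the change is roughly just these two lines (direct-read-namespaces makes Monstache read the existing documents once, while change-stream-namespaces only tails new change events):
direct-read-namespaces = ["wtlive.myuser"]
change-stream-namespaces = ["wtlive.myuser"]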

use a `task` to export a workspace-specific environment variable: $PATH?

I have several workspaces, each with a unique environment.
When I load a workspace, I would like to update the Linux $PATH accordingly. I tried the following:
### workspace.code-workspace
{
    "folders": [
        {
            "path": "."
        }
    ],
    "settings": {},
    "tasks": [
        {
            "command": "export $PATH=${workspaceFolder}/bin:$PATH"
        }
    ]
}
This fails, as shown by:
me@pc:~/Containers/cuda_devcon/project$ ls -alvh ./bin/julia
lrwxrwxrwx 1 me me 21 Jan 29 17:38 ./bin/julia -> julia-1.6.7/bin/julia
me@pc:~/Containers/cuda_devcon/project$ which julia
me@pc:~/Containers/cuda_devcon/project$ julia
Command 'julia' not found, but can be installed with:
sudo snap install julia # version 1.8.5, or
sudo apt install julia # version 1.4.1+dfsg-1
See 'snap info julia' for additional versions.

Karma webpack compiled but not executed

I am running JavaScript tests in Karma, with the functionality compiled by webpack. It seems that the sources are compiled but not processed by Karma; no tests run.
testing.webpack.js:
module.exports = {
  devtool: 'inline-source-map',
  resolve: {
    extensions: ['.js']
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: [/node_modules/],
        use: [{
          loader: 'babel-loader',
        }]
      }
    ]
  }
};
Here is my karma.conf.js:
const webpackConfig = require('./testing.webpack.js');
module.exports = function (config) {
  config.set({
    basePath: './',
    coverageReporter: {
      dir: 'tmp/coverage/',
      reporters: [
        { type: 'html', subdir: 'report-html' },
        { type: 'lcov', subdir: 'report-lcov' }
      ],
      instrumenterOptions: {
        istanbul: { noCompact: true }
      }
    },
    files: [
      'spec/**/*.spec.js'
    ],
    frameworks: ['should', 'jasmine', 'mocha'],
    reporters: ['mocha', 'coverage'],
    preprocessors: {
      'spec/**/*.spec.js': ['webpack', 'sourcemap']
    },
    plugins: [
      'karma-webpack',
      'karma-jasmine',
      'karma-mocha',
      'karma-should',
      'karma-coverage',
      'karma-chrome-launcher',
      'karma-phantomjs-launcher',
      'karma-mocha-reporter',
      'karma-sourcemap-loader'
    ],
    webpack: webpackConfig,
    webpackMiddleware: {
      stats: 'errors-only'
    }
  });
  return config;
};
I receive the following output:
npx karma start karma.conf.js --single-run --browsers Chrome --debug
14 12 2020 15:54:55.608:DEBUG [config]: Loading config /home/victor/github/victor-shelepen/instance-locator/karma.conf.js
14 12 2020 15:54:55.612:DEBUG [config]: autoWatch set to false, because of singleRun
14 12 2020 15:54:55.613:DEBUG [karma-server]: Final config Config {
LOG_DISABLE: 'OFF',
LOG_ERROR: 'ERROR',
LOG_WARN: 'WARN',
LOG_INFO: 'INFO',
LOG_DEBUG: 'DEBUG',
frameworks: [ 'should', 'jasmine', 'mocha' ],
protocol: 'http:',
port: 9876,
listenAddress: '0.0.0.0',
hostname: 'localhost',
httpsServerConfig: {},
basePath: '/home/victor/github/victor-shelepen/instance-locator',
files: [
Pattern {
pattern: '/home/victor/github/victor-shelepen/instance-locator/spec/**/*.spec.js',
served: true,
included: true,
watched: false,
nocache: false,
weight: [ 1, 1, 1, 0, 0, 0 ],
type: undefined,
isBinary: undefined
}
],
browserConsoleLogOptions: { level: 'debug', format: '%b %T: %m', terminal: true },
customContextFile: null,
customDebugFile: null,
customClientContextFile: null,
exclude: [
'/home/victor/github/victor-shelepen/instance-locator/karma.conf.js'
],
logLevel: 'DEBUG',
colors: true,
autoWatch: false,
autoWatchBatchDelay: 250,
restartOnFileChange: false,
usePolling: true,
reporters: [ 'mocha', 'coverage' ],
singleRun: true,
browsers: [ 'Chrome' ],
captureTimeout: 60000,
pingTimeout: 5000,
proxies: {},
proxyValidateSSL: true,
preprocessors: [Object: null prototype] {
'/home/victor/github/victor-shelepen/instance-locator/spec/**/*.spec.js': [ 'webpack', 'sourcemap' ]
},
preprocessor_priority: {},
urlRoot: '/',
upstreamProxy: undefined,
reportSlowerThan: 0,
loggers: [
{
type: 'console',
layout: { type: 'pattern', pattern: '%[%d{DATE}:%p [%c]: %]%m' }
}
],
transports: [ 'polling', 'websocket' ],
forceJSONP: false,
plugins: [
'karma-webpack',
'karma-jasmine',
'karma-mocha',
'karma-should',
'karma-coverage',
'karma-chrome-launcher',
'karma-phantomjs-launcher',
'karma-mocha-reporter',
'karma-sourcemap-loader'
],
client: {
args: [],
useIframe: true,
runInParent: false,
captureConsole: true,
clearContext: true
},
defaultClient: {
args: [],
useIframe: true,
runInParent: false,
captureConsole: true,
clearContext: true
},
browserDisconnectTimeout: 2000,
browserDisconnectTolerance: 0,
browserNoActivityTimeout: 30000,
processKillTimeout: 2000,
concurrency: Infinity,
failOnEmptyTestSuite: true,
retryLimit: 2,
detached: false,
crossOriginAttribute: true,
browserSocketTimeout: 20000,
cmd: 'start',
debug: true,
configFile: '/home/victor/github/victor-shelepen/instance-locator/karma.conf.js',
coverageReporter: {
dir: 'tmp/coverage/',
reporters: [
{ type: 'html', subdir: 'report-html' },
{ type: 'lcov', subdir: 'report-lcov' }
],
instrumenterOptions: { istanbul: { noCompact: true } }
},
webpack: {
devtool: 'inline-source-map',
resolve: { extensions: [ '.js' ] },
module: {
rules: [
{
test: /\.js$/,
exclude: [ /node_modules/ ],
use: [ { loader: 'babel-loader' } ]
}
]
}
},
webpackMiddleware: { stats: 'errors-only' }
}
14 12 2020 15:54:55.614:DEBUG [plugin]: Loading plugin karma-webpack.
14 12 2020 15:54:55.664:DEBUG [plugin]: Loading plugin karma-jasmine.
14 12 2020 15:54:55.665:DEBUG [plugin]: Loading plugin karma-mocha.
14 12 2020 15:54:55.666:DEBUG [plugin]: Loading plugin karma-should.
14 12 2020 15:54:55.667:DEBUG [plugin]: Loading plugin karma-coverage.
14 12 2020 15:54:55.914:DEBUG [plugin]: Loading plugin karma-chrome-launcher.
14 12 2020 15:54:55.920:DEBUG [plugin]: Loading plugin karma-phantomjs-launcher.
14 12 2020 15:54:55.938:DEBUG [plugin]: Loading plugin karma-mocha-reporter.
14 12 2020 15:54:55.941:DEBUG [plugin]: Loading plugin karma-sourcemap-loader.
14 12 2020 15:54:55.956:DEBUG [web-server]: Instantiating middleware
14 12 2020 15:54:55.957:DEBUG [reporter]: Trying to load reporter: mocha
14 12 2020 15:54:55.958:DEBUG [reporter]: Trying to load color-version of reporter: mocha (mocha_color)
14 12 2020 15:54:55.959:DEBUG [reporter]: Couldn't load color-version.
14 12 2020 15:54:55.959:DEBUG [reporter]: Trying to load reporter: coverage
14 12 2020 15:54:55.959:DEBUG [reporter]: Trying to load color-version of reporter: coverage (coverage_color)
14 12 2020 15:54:55.959:DEBUG [reporter]: Couldn't load color-version.
START:
Webpack bundling...
asset runtime.js 11.4 KiB [compared for emit] (name: runtime)
asset commons.js 989 bytes [compared for emit] (name: commons) (id hint: commons)
asset another.spec.4218216441.js 175 bytes [compared for emit] (name: another.spec.4218216441)
Entrypoint another.spec.4218216441 12.5 KiB = runtime.js 11.4 KiB commons.js 989 bytes another.spec.4218216441.js 175 bytes
webpack 5.10.1 compiled successfully in 204 ms
14 12 2020 15:54:56.659:INFO [karma-server]: Karma v5.2.3 server started at http://localhost:9876/
14 12 2020 15:54:56.659:INFO [launcher]: Launching browsers Chrome with concurrency unlimited
14 12 2020 15:54:56.662:INFO [launcher]: Starting browser Chrome
14 12 2020 15:54:56.662:DEBUG [launcher]: null -> BEING_CAPTURED
14 12 2020 15:54:56.663:DEBUG [temp-dir]: Creating temp dir at /tmp/karma-27533261
14 12 2020 15:54:56.663:DEBUG [launcher]: google-chrome --user-data-dir=/tmp/karma-27533261 --enable-automation --no-default-browser-check --no-first-run --disable-default-apps --disable-popup-blocking --disable-translate --disable-background-timer-throttling --disable-renderer-backgrounding --disable-device-discovery-notifications http://localhost:9876/?id=27533261
14 12 2020 15:54:57.068:DEBUG [web-server]: serving: /home/victor/github/victor-shelepen/instance-locator/node_modules/karma/static/client.html
14 12 2020 15:54:57.150:DEBUG [web-server]: serving: /home/victor/github/victor-shelepen/instance-locator/node_modules/karma/static/karma.js
14 12 2020 15:54:57.229:DEBUG [karma-server]: A browser has connected on socket RYAt3YKj13i66X8RAAAA
14 12 2020 15:54:57.278:DEBUG [Chrome 87.0.4280.88 (Linux x86_64)]: undefined -> CONNECTED
14 12 2020 15:54:57.279:INFO [Chrome 87.0.4280.88 (Linux x86_64)]: Connected on socket RYAt3YKj13i66X8RAAAA with id 27533261
14 12 2020 15:54:57.280:DEBUG [launcher]: BEING_CAPTURED -> CAPTURED
14 12 2020 15:54:57.280:DEBUG [launcher]: Chrome (id 27533261) captured in 0.621 secs
14 12 2020 15:54:57.280:DEBUG [Chrome 87.0.4280.88 (Linux x86_64)]: CONNECTED -> CONFIGURING
14 12 2020 15:54:57.289:DEBUG [web-server]: serving: /home/victor/github/victor-shelepen/instance-locator/node_modules/karma/static/favicon.ico
14 12 2020 15:54:57.292:DEBUG [web-server]: upgrade /socket.io/?EIO=3&transport=websocket&sid=RYAt3YKj13i66X8RAAAA
14 12 2020 15:54:57.323:DEBUG [middleware:karma]: custom files null null null
14 12 2020 15:54:57.323:DEBUG [middleware:karma]: Serving static request /context.html
14 12 2020 15:54:57.325:DEBUG [web-server]: serving: /home/victor/github/victor-shelepen/instance-locator/node_modules/karma/static/context.html
14 12 2020 15:54:57.346:DEBUG [middleware:source-files]: Requesting /base/node_modules/mocha/mocha.js?143074c949211f445d6c1a8a431990c9849bf6ae
14 12 2020 15:54:57.347:DEBUG [middleware:source-files]: Fetching /home/victor/github/victor-shelepen/instance-locator/node_modules/mocha/mocha.js
14 12 2020 15:54:57.347:DEBUG [web-server]: serving (cached): /home/victor/github/victor-shelepen/instance-locator/node_modules/mocha/mocha.js
14 12 2020 15:54:57.352:DEBUG [middleware:source-files]: Requesting /base/node_modules/karma-mocha/lib/adapter.js?a0f4bbc139407501892ac58d70c2791e7adec343
14 12 2020 15:54:57.352:DEBUG [middleware:source-files]: Fetching /home/victor/github/victor-shelepen/instance-locator/node_modules/karma-mocha/lib/adapter.js
14 12 2020 15:54:57.352:DEBUG [web-server]: serving (cached): /home/victor/github/victor-shelepen/instance-locator/node_modules/karma-mocha/lib/adapter.js
14 12 2020 15:54:57.353:DEBUG [middleware:source-files]: Requesting /base/node_modules/jasmine-core/lib/jasmine-core/jasmine.js?8f66117bbfbdf7b03a8f43bc667e3a4421ce15de
14 12 2020 15:54:57.353:DEBUG [middleware:source-files]: Fetching /home/victor/github/victor-shelepen/instance-locator/node_modules/jasmine-core/lib/jasmine-core/jasmine.js
14 12 2020 15:54:57.354:DEBUG [web-server]: serving (cached): /home/victor/github/victor-shelepen/instance-locator/node_modules/jasmine-core/lib/jasmine-core/jasmine.js
14 12 2020 15:54:57.354:DEBUG [middleware:source-files]: Requesting /base/node_modules/karma-jasmine/lib/boot.js?760d54bbca4f739f1f8b252c1636d76201cc4e88
14 12 2020 15:54:57.355:DEBUG [middleware:source-files]: Fetching /home/victor/github/victor-shelepen/instance-locator/node_modules/karma-jasmine/lib/boot.js
14 12 2020 15:54:57.355:DEBUG [web-server]: serving (cached): /home/victor/github/victor-shelepen/instance-locator/node_modules/karma-jasmine/lib/boot.js
14 12 2020 15:54:57.356:DEBUG [web-server]: serving: /home/victor/github/victor-shelepen/instance-locator/node_modules/karma/static/context.js
14 12 2020 15:54:57.370:DEBUG [middleware:source-files]: Requesting /base/node_modules/karma-jasmine/lib/adapter.js?c22f41e6dc6770beb0be7c86dfade9637bce9290
14 12 2020 15:54:57.370:DEBUG [middleware:source-files]: Fetching /home/victor/github/victor-shelepen/instance-locator/node_modules/karma-jasmine/lib/adapter.js
14 12 2020 15:54:57.370:DEBUG [web-server]: serving (cached): /home/victor/github/victor-shelepen/instance-locator/node_modules/karma-jasmine/lib/adapter.js
14 12 2020 15:54:57.372:DEBUG [middleware:source-files]: Requesting /base/node_modules/should/should.js?1aa5493eba423eb3fbfa86274d47aff5d2defc34
14 12 2020 15:54:57.372:DEBUG [middleware:source-files]: Fetching /home/victor/github/victor-shelepen/instance-locator/node_modules/should/should.js
14 12 2020 15:54:57.373:DEBUG [web-server]: serving (cached): /home/victor/github/victor-shelepen/instance-locator/node_modules/should/should.js
14 12 2020 15:54:57.374:DEBUG [middleware:source-files]: Requesting /absoluteanother.spec.4218216441.js?144f72c8ebc6aafdd231efe77b325a86fb00deba
14 12 2020 15:54:57.374:DEBUG [middleware:source-files]: Fetching another.spec.4218216441.js
14 12 2020 15:54:57.374:DEBUG [web-server]: serving (cached): another.spec.4218216441.js
14 12 2020 15:54:57.444:DEBUG [Chrome 87.0.4280.88 (Linux x86_64)]: CONFIGURING -> EXECUTING
14 12 2020 15:54:57.446:DEBUG [Chrome 87.0.4280.88 (Linux x86_64)]: EXECUTING -> CONNECTED
14 12 2020 15:54:57.447:DEBUG [launcher]: CAPTURED -> BEING_KILLED
14 12 2020 15:54:57.447:DEBUG [launcher]: BEING_KILLED -> BEING_FORCE_KILLED
Finished in 0.002 secs / 0 secs # 15:54:57 GMT+0200 (Eastern European Standard Time)
SUMMARY:
✔ 0 tests completed
14 12 2020 15:54:57.456:DEBUG [karma-server]: Run complete, exiting.
14 12 2020 15:54:57.457:DEBUG [launcher]: Disconnecting all browsers
14 12 2020 15:54:57.457:DEBUG [launcher]: BEING_FORCE_KILLED -> BEING_FORCE_KILLED
14 12 2020 15:54:57.457:DEBUG [proxy]: Destroying proxy agents
14 12 2020 15:54:57.486:DEBUG [coverage]: Writing coverage to /home/victor/github/victor-shelepen/instance-locator/tmp/coverage/report-html
14 12 2020 15:54:57.492:DEBUG [coverage]: Writing coverage to /home/victor/github/victor-shelepen/instance-locator/tmp/coverage/report-lcov
14 12 2020 15:54:57.500:DEBUG [launcher]: Process Chrome exited with code 0 and signal null
14 12 2020 15:54:57.500:DEBUG [temp-dir]: Cleaning temp dir /tmp/karma-27533261
14 12 2020 15:54:57.536:DEBUG [launcher]: Finished all browsers
14 12 2020 15:54:57.537:DEBUG [launcher]: BEING_FORCE_KILLED -> FINISHED
14 12 2020 15:54:57.537:DEBUG [launcher]: FINISHED -> FINISHED
I can see that it has been compiled as another.spec.4218216441.js.
Here is another.spec.js:
describe('Testing', () => {
  it('G', () => {
    should(1).be(1);
  });
});
But no tests run.
I would appreciate any tip. Thank you.
Previously, when using the alpha version of karma-webpack 5, if you did not include 'webpack' as a framework in your Karma configuration, everything would build but no tests would run, just like this. The stable 5.0.0 release has since been published; it addresses this issue and fixes it on the fly. If you update to it, this should work fine.
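For example, with karma-webpack 5 the frameworks line in the karma.conf.js above would be expected to include 'webpack', roughly like this (a sketch, not the asker's verified config):
frameworks: ['should', 'jasmine', 'mocha', 'webpack'],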

How to pass extra configuration to RabbitMQ with Helm?

I'm using this chart: https://github.com/helm/charts/tree/master/stable/rabbitmq to deploy a cluster of 3 RabbitMQ nodes on Kubernetes. My intention is to have all the queues mirrored within 2 nodes in the cluster.
Here's the command I use to run Helm: helm install --name rabbitmq-local -f rabbitmq-values.yaml stable/rabbitmq
And here's the content of rabbitmq-values.yaml:
persistence:
  enabled: true
resources:
  requests:
    memory: 256Mi
    cpu: 100m
replicas: 3
rabbitmq:
  extraConfiguration: |-
    {
      "policies": [
        {
          "name": "queue-mirroring-exactly-two",
          "pattern": "^ha\.",
          "vhost": "/",
          "definition": {
            "ha-mode": "exactly",
            "ha-params": 2
          }
        }
      ]
    }
However, the nodes fail to start due to some parsing errors, and they stay in a crash loop. Here's the output of kubectl logs rabbitmq-local-0:
BOOT FAILED
===========
Config file generation failed:
=CRASH REPORT==== 23-Jul-2019::15:32:52.880991 ===
crasher:
initial call: lager_handler_watcher:init/1
pid: <0.95.0>
registered_name: []
exception exit: noproc
in function gen:do_for_proc/2 (gen.erl, line 228)
in call from gen_event:rpc/2 (gen_event.erl, line 239)
in call from lager_handler_watcher:install_handler2/3 (src/lager_handler_watcher.erl, line 117)
in call from lager_handler_watcher:init/1 (src/lager_handler_watcher.erl, line 51)
in call from gen_server:init_it/2 (gen_server.erl, line 374)
in call from gen_server:init_it/6 (gen_server.erl, line 342)
ancestors: [lager_handler_watcher_sup,lager_sup,<0.87.0>]
message_queue_len: 0
messages: []
links: [<0.90.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 610
stack_size: 27
reductions: 228
neighbours:
15:32:53.679 [error] Syntax error in /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf after line 14 column 1, parsing incomplete
=SUPERVISOR REPORT==== 23-Jul-2019::15:32:53.681369 ===
supervisor: {local,gr_counter_sup}
errorContext: child_terminated
reason: killed
offender: [{pid,<0.97.0>},
{id,gr_lager_default_tracer_counters},
{mfargs,{gr_counter,start_link,
[gr_lager_default_tracer_counters]}},
{restart_type,transient},
{shutdown,brutal_kill},
{child_type,worker}]
=SUPERVISOR REPORT==== 23-Jul-2019::15:32:53.681514 ===
supervisor: {local,gr_param_sup}
errorContext: child_terminated
reason: killed
offender: [{pid,<0.96.0>},
{id,gr_lager_default_tracer_params},
{mfargs,{gr_param,start_link,[gr_lager_default_tracer_params]}},
{restart_type,transient},
{shutdown,brutal_kill},
{child_type,worker}]
If I remove the rabbitmq.extraConfiguration part, the nodes start properly, so there must be something wrong with the way I'm writing the policy. Any idea what I'm doing wrong?
Thank you.
According to https://github.com/helm/charts/tree/master/stable/rabbitmq#load-definitions, it is possible to link a JSON configuration as extraConfiguration. So we ended up with this setup that works:
rabbitmq-values.yaml:
rabbitmq:
  loadDefinition:
    enabled: true
    secretName: rabbitmq-load-definition
  extraConfiguration:
    management.load_definitions = /app/load_definition.json
rabbitmq-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: rabbitmq-load-definition
type: Opaque
stringData:
  load_definition.json: |-
    {
      "vhosts": [
        {
          "name": "/"
        }
      ],
      "policies": [
        {
          "name": "queue-mirroring-exactly-two",
          "pattern": "^ha\.",
          "vhost": "/",
          "definition": {
            "ha-mode": "exactly",
            "ha-params": 2
          }
        }
      ]
    }
The secret must be loaded into Kubernetes before the Helm chart is installed, which goes something like this: kubectl apply -f ./rabbitmq-secret.yaml.
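In other words, the order of operations is roughly (reusing the install command from the question):
kubectl apply -f ./rabbitmq-secret.yaml
helm install --name rabbitmq-local -f rabbitmq-values.yaml stable/rabbitmq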
You can use the default config of the Helm chart.
If needed, you can use extraSecrets to let the chart create the secret for you. This way, you don't need to create it manually before deploying a release. For example:
extraSecrets:
  load-definition:
    load_definition.json: |
      {
        "vhosts": [
          {
            "name": "/"
          }
        ]
      }
rabbitmq:
  loadDefinition:
    enabled: true
    secretName: load-definition
  extraConfiguration: |
    management.load_definitions = /app/load_definition.json
https://github.com/helm/charts/tree/master/stable/rabbitmq
Instead of using extraConfiguration, use advancedConfiguration; you should put all this info in that section, as it is meant for the classic (Erlang) config format.
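As a rough sketch only (assuming the chart exposes a rabbitmq.advancedConfiguration value that is passed through as classic Erlang-term config; check the chart's values for the exact key), the shape would be something like:
rabbitmq:
  advancedConfiguration: |-
    [
      %% classic Erlang config terms go here
    ].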

Symfony app in Docker doesn't respond in URL call

[SOLVED]
I want to put the code of my Symfony application with MongoDB into a Docker image.
After I build the application image, I receive:
PS E:\myapi> docker-compose up
Starting mongo
Starting myapi_web_server_1
Attaching to mongo, myapi_web_server_1
mongo | 2017-04-21T13:36:23.464+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=37e6234dbaf5
mongo | 2017-04-21T13:36:23.464+0000 I CONTROL [initandlisten] db version v3.0.14
mongo | 2017-04-21T13:36:23.464+0000 I CONTROL [initandlisten] git version: 08352afcca24bfc145240a0fac9d28b978ab77f3
mongo | 2017-04-21T13:36:23.464+0000 I CONTROL [initandlisten] build info: Linux ip-10-30-223-232 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 BOOST_LIB_VERSION=1_49
mongo | 2017-04-21T13:36:23.464+0000 I CONTROL [initandlisten] allocator: tcmalloc
mongo | 2017-04-21T13:36:23.464+0000 I CONTROL [initandlisten] options: { storage: { mmapv1: { smallFiles: true } } }
mongo | 2017-04-21T13:36:23.476+0000 I JOURNAL [initandlisten] journal dir=/data/db/journal
mongo | 2017-04-21T13:36:23.476+0000 I JOURNAL [initandlisten] recover : no journal files present, no recovery needed
mongo | 2017-04-21T13:36:23.742+0000 I JOURNAL [durability] Durability thread started
mongo | 2017-04-21T13:36:23.742+0000 I JOURNAL [journal writer] Journal writer thread started
mongo | 2017-04-21T13:36:23.871+0000 I NETWORK [initandlisten] waiting for connections on port 27017
web_server_1 | 9-1ubuntu4.21 Development Server started at Fri Apr 21 13:36:24 2017
web_server_1 | Listening on http://0.0.0.0:8000
web_server_1 | Document root is /var/www
web_server_1 | Press Ctrl-C to quit.
But when I try to access http://172.17.0.3:8000/my_api/, where 172.17.0.3 is the container's IP, I receive an error message in Postman.
The docker-compose.yml file:
web_server:
  build: web_server/
  ports:
    - "8000:8000"
  links:
    - mongo
  tty: true
  environment:
    SYMFONY__MONGO_ADDRESS: mongo
    SYMFONY__MONGO_PORT: 27017
mongo:
  image: mongo:3.0
  container_name: mongo
  command: mongod --smallfiles
  expose:
    - 27017
Result of the command docker-compose ps:
PS E:\myapi> docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------------------------
myapi_web_server_1 /bin/bash /entrypoint.sh Up 0.0.0.0:8000->8000/tcp
mongo docker-entrypoint.sh mongo ... Up 27017/tcp
And the result of the command docker inspect myapi_web_server_1:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "774a7dcbdbfbf7e437ddff340aedd4ce951dffa7a80deab9afb6e6a8abc70bde",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"8000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8000"
}
]
},
"SandboxKey": "/var/run/docker/netns/774a7dcbdbfb",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "4c96f6e6f8a2c80dd7ea7469dd9d74760be1af81a8039a4f835145b8f1ef5fb5",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:03",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "e174576418903bf0809edd47b77d52e2fc7644d5aacafa15ec6a8f2d15458b8a",
"EndpointID": "4c96f6e6f8a2c80dd7ea7469dd9d74760be1af81a8039a4f835145b8f1ef5fb5",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:03"
}
}
}
When I try to call http://127.0.0.1:8000/my_api, I receive an error in Postman.
And in the console:
web_server_1 | [Fri Apr 21 13:51:08 2017] 172.17.0.1:33382 [404]: /my_api - No such file or directory
The Dockerfile content is:
FROM ubuntu:14.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y \
    git \
    curl \
    php5-cli \
    php5-json \
    php5-intl
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer
ADD entrypoint.sh /entrypoint.sh
ADD ./code /var/www
WORKDIR /var/www
#RUN chmod +x /entrypoint.sh
ENTRYPOINT [ "/bin/bash", "/entrypoint.sh" ]
List of routes:
PS E:\myapi\web_server\code> php bin/console debug:router
----------------------------------- -------- -------- ------ -----------------------------------
Name Method Scheme Host Path
----------------------------------- -------- -------- ------ -----------------------------------
_wdt ANY ANY ANY /_wdt/{token}
_profiler_home ANY ANY ANY /_profiler/
_profiler_search ANY ANY ANY /_profiler/search
_profiler_search_bar ANY ANY ANY /_profiler/search_bar
_profiler_info ANY ANY ANY /_profiler/info/{about}
_profiler_phpinfo ANY ANY ANY /_profiler/phpinfo
_profiler_search_results ANY ANY ANY /_profiler/{token}/search/results
_profiler_open_file ANY ANY ANY /_profiler/open
_profiler ANY ANY ANY /_profiler/{token}
_profiler_router ANY ANY ANY /_profiler/{token}/router
_profiler_exception ANY ANY ANY /_profiler/{token}/exception
_profiler_exception_css ANY ANY ANY /_profiler/{token}/exception.css
_twig_error_test ANY ANY ANY /_error/{code}.{_format}
db_transaction_postaddtransaction POST ANY ANY /my_api
db_transaction_gettransactions GET ANY ANY /my_api/
db_transaction_getbalance GET ANY ANY /balance/
homepage ANY ANY ANY /
----------------------------------- -------- -------- ------ -----------------------------------
Result using: list routes
{
"web_profiler.controller.profiler":[
"_wdt",
"_profiler_home",
"_profiler_search",
"_profiler_search_bar",
"_profiler_info",
"_profiler_phpinfo",
"_profiler_search_results",
"_profiler_open_file",
"_profiler"
],
"web_profiler.controller.router":[
"_profiler_router"
],
"web_profiler.controller.exception":[
"_profiler_exception",
"_profiler_exception_css"
],
"twig.controller.preview_error":[
"_twig_error_test"
],
"DBBundle\\Controller\\TransactionController":[
"db_transaction_postaddtransaction",
"db_transaction_gettransactions",
"db_transaction_getbalance"
],
"AppBundle\\Controller\\DefaultController":[
"homepage",
"route"
]
}
When I start the server with the command php bin\console server:run, all routes work.
What is wrong, and how can I access the API methods?
I used https://phpdocker.io/ to generate the files and it works.