I'm trying to increase the max_wal_senders param and no matter how or what I set it to, it always shows up as 1.
I've updated postgresql.conf; there is only one instance of max_wal_senders and it's set to 10.
I've also used ALTER SYSTEM SET max_wal_senders = 10 and verified it shows as 10 in postgresql.auto.conf.
I've restarted the DB multiple times. Other config changes, like max_connections, do show up as updated when I check SHOW max_connections, so I know I'm editing the correct config file.
Running select * from pg_settings where name = 'max_wal_senders'; shows the current value as 1, boot_val as 10, and reset_val as 1.
It seems like the setting is getting reset or the change just isn't being applied for some reason, but I'm not having this issue with any other parameter. Anything I'm missing?
It should also be noted that I'm running Postgres through Docker, and my method for restarting Postgres is simply restarting the Docker container. (Again, this works for other config changes, so I'm not sure it matters.)
{
  "select * from pg_settings where name = 'max_wal_senders'": [
    {
      "name" : "max_wal_senders",
      "setting" : "1",
      "unit" : null,
      "category" : "Replication / Sending Servers",
      "short_desc" : "Sets the maximum number of simultaneously running WAL sender processes.",
      "extra_desc" : null,
      "context" : "postmaster",
      "vartype" : "integer",
      "source" : "command line",
      "min_val" : "0",
      "max_val" : "262143",
      "enumvals" : null,
      "boot_val" : "10",
      "reset_val" : "1",
      "sourcefile" : null,
      "sourceline" : null,
      "pending_restart" : false
    }
  ]
}
In checking my docker-compose.yml, I see that in the postgres command I'm setting -cmax_wal_senders=1:
postgres -cwal_level=archive -carchive_mode=on -carchive_command="/usr/bin/wget wale/wal-push/%f -O -" -carchive_timeout=600 -ccheckpoint_timeout=700 -cmax_wal_senders=1
I've since updated this to 10 and restarted the container, but I'm still seeing 1:
postgres -cwal_level=archive -carchive_mode=on -carchive_command="/usr/bin/wget wale/wal-push/%f -O -" -carchive_timeout=600 -ccheckpoint_timeout=700 -cmax_wal_senders=10
The explanation can be seen in the output from pg_settings: the source of the setting is "command line". That means that the server was started with that explicit parameter value, e.g.
postgres -c max_wal_senders=1 -D datadir
That will override the setting in the configuration files.
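If the running value is still 1 after editing docker-compose.yml, note that docker restart reuses the container's original command, so the container has to be recreated before a changed command line takes effect. A minimal sketch, assuming the Compose service is named postgres and the default postgres superuser (adjust to your setup):

docker-compose up -d --force-recreate postgres
docker-compose exec postgres psql -U postgres -c "SHOW max_wal_senders;"
docker-compose exec postgres psql -U postgres -c "SELECT setting, source FROM pg_settings WHERE name = 'max_wal_senders';"

After recreation the source will still be "command line", but the setting should now be 10.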
I have the following docker-compose service with a custom database specified, but I don't see the database getting created when I look in the GUI (Compass). I only see the 3 default databases (admin, config, local).
I've looked into a linked answer, but I need a specific answer for my question, please.
mongo:
  image: mongo:4.0.10
  container_name: mongo
  restart: always
  environment:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: mypass
    MONGO_INITDB_DATABASE: mydb
  ports:
    - 27017:27017
    - 27018:27018
    - 27019:27019
The expectation is for the user database to be created and prefilled with some records.
Edit - made some progress, but there are 2 problems.
I added volumes:
mongo:
  image: mongo:4.0.10
  container_name: mongo
  restart: always
  volumes:
    - ./assets:/docker-entrypoint-initdb.d/
1. Files are being ignored
Within the assets folder I have 3 files, and I see this in the logs; my files are getting ignored:
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/file1.json
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/file2.json
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/file3.json
All my JSON files look like the following (no root array object, no [] at the root):
{ "_id" : { "$oid" : "5d3a9d423b881e4ca04ae8f0" }, "name" : "Human Resource" }
{ "_id" : { "$oid" : "5d3a9d483b881e4ca04ae8f1" }, "name" : "Sales" }
2. Default database not getting created. The following line is not having any effect:
MONGO_INITDB_DATABASE: mydb
All files with a *.json extension will be ignored; they should be *.js. Look into the documentation on the MongoDB Docker Hub page:
MONGO_INITDB_DATABASE
This variable allows you to specify the name of a database to be used
for creation scripts in /docker-entrypoint-initdb.d/*.js (see
Initializing a fresh instance below). MongoDB is fundamentally
designed for "create on first use", so if you do not insert data with
your JavaScript files, then no database is created.
Initializing a fresh instance
When a container is started for the first time it will execute files
with extensions .sh and .js that are found in
/docker-entrypoint-initdb.d. Files will be executed in alphabetical
order. .js files will be executed by mongo using the database
specified by the MONGO_INITDB_DATABASE variable, if it is present, or
test otherwise. You may also switch databases within the .js script.
You can look into this example:
Create a folder data and place create_article.js in it
(in the example I am passing the DB user you created):
db = db.getSiblingDB("user");
db.article.drop();
db.article.save( {
title : "this is my title" ,
author : "bob" ,
posted : new Date(1079895594000) ,
pageViews : 5 ,
tags : [ "fun" , "good" , "fun" ] ,
comments : [
{ author :"joe" , text : "this is cool" } ,
{ author :"sam" , text : "this is bad" }
],
other : { foo : 5 }
});
db.article.save( {
title : "this is your title" ,
author : "dave" ,
posted : new Date(4121381470000) ,
pageViews : 7 ,
tags : [ "fun" , "nasty" ] ,
comments : [
{ author :"barbara" , text : "this is interesting" } ,
{ author :"jenny" , text : "i like to play pinball", votes: 10 }
],
other : { bar : 14 }
});
db.article.save( {
title : "this is some other title" ,
author : "jane" ,
posted : new Date(978239834000) ,
pageViews : 6 ,
tags : [ "nasty" , "filthy" ] ,
comments : [
{ author :"will" , text : "i don't like the color" } ,
{ author :"jenny" , text : "can i get that in green?" }
],
other : { bar : 14 }
});
Mount the data directory:
docker run --rm -it --name some-mongo -v /home/data/:/docker-entrypoint-initdb.d/ -e MONGO_INITDB_DATABASE=user -e MONGO_INITDB_ROOT_USERNAME=root -e MONGO_INITDB_ROOT_PASSWORD=mypass mongo:4.0.10
Once the container is created you will be able to see the DBs.
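If you'd rather keep the data as JSON instead of rewriting it as .js, a one-off mongoimport into the running container is another option. This is only a sketch: it assumes the question's JSON files are still mounted at /docker-entrypoint-initdb.d/ inside the container, reuses the credentials from the compose file, and loads each file into a collection named after the file, which may not be what you want.

# Import the line-delimited JSON files into "mydb" inside the running "mongo" container.
for f in file1 file2 file3; do
  docker exec mongo mongoimport \
    --username root --password mypass --authenticationDatabase admin \
    --db mydb --collection "$f" --file "/docker-entrypoint-initdb.d/$f.json"
done

mongoimport reads one document per line by default and understands the extended JSON $oid values shown in the question.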
Can anyone give me some directions/examples on how to import about 100 million rows from SQL Server to Elasticsearch using C#?
Currently I'm using the NEST client in C#, but it is very slow (5k-10k rows/minute); the slowness looks like it comes more from the app side than from ES.
Appreciate any help.
You can use IndexMany, but if you want to index only one table, I think you can try the JDBC importer plugin. After installation, you can simply execute a .bat script to index your table:
#echo off
set DIR=%~dp0
set LIB=%DIR%..\lib\*
set BIN=%DIR%..\bin
REM Pipe the job definition below (JSON) to the JDBC importer via stdin
echo {^
"type" : "jdbc",^
"jdbc" : {^
"url" : "jdbc:sqlserver://localhost:25488;instanceName=SQLEXPRESS;databaseName=AdventureWorks2014",^
"user" : "hintdesk",^
"password" : "123456",^
"sql" : "SELECT BusinessEntityID as _id, BusinessEntityID, Title, FirstName, MiddleName, LastName FROM Person.Person",^
"treat_binary_as_string" : true,^
"elasticsearch" : {^
"cluster" : "elasticsearch",^
"host" : "localhost",^
"port" : 9200^
},^
"index" : "person",^
"type" : "person"^
}^
}^ | "%JAVA_HOME%\bin\java" -cp "%LIB%" -Dlog4j.configurationFile="%BIN%\log4j2.xml" "org.xbib.tools.Runner" "org.xbib.tools.JDBCImporter"
When using https://github.com/jprante/elasticsearch-river-jdbc I notice that the following curl statement successfully indexes data the first time. However, the river fails to repeatedly poll the database for updates.
To restate, when I run the following, the river successfully connects to MySQL, runs the query successfully, indexes the results, but never runs the query again.
curl -XPUT '127.0.0.1:9200/_river/projects_river/_meta' -d '{
    "type" : "jdbc",
    "index" : {
        "index" : "test_projects",
        "type" : "project",
        "bulk_size" : 100,
        "max_bulk_requests" : 1,
        "autocommit" : true
    },
    "jdbc" : {
        "driver" : "com.mysql.jdbc.Driver",
        "poll" : "1m",
        "strategy" : "simple",
        "url" : "jdbc:mysql://localhost:3306/test",
        "user" : "root",
        "sql" : "SELECT name, updated_at from projects p where p.updated_at > date_sub(now(),interval 1 minute)"
    }
}'
Tailing the log, I see:
[2013-09-27 16:32:24,482][INFO ][org.elasticsearch.river.jdbc.strategy.simple.SimpleRiverFlow] next run, waiting 1m
[2013-09-27 16:33:24,488][INFO ][org.elasticsearch.river.jdbc.strategy.simple.SimpleRiverFlow] next run, waiting 1m
[2013-09-27 16:34:24,494][INFO ][org.elasticsearch.river.jdbc.strategy.simple.SimpleRiverFlow] next run, waiting 1m
But the index stays empty. I'm running on a MacBook Pro with Elasticsearch stable 0.90.2 (HEAD) and mysql-connector-java-5.1.25-bin.jar in the river plugins directory.
I think if you switch your strategy value from "simple" to "poll" you may get what you are looking for - it has worked for me with JDBC on that version of Elasticsearch against MS SQL.
Also, you will need to select a field as _id (select primarykey as _id), as this is what the Elasticsearch river uses to determine which records are added/deleted/updated. Both changes are shown in the sketch below.
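Putting both suggestions together, the river definition from the question would look roughly like this. This is only a sketch: the id column used for _id is an assumption about the projects table, and the "poll" strategy value follows the answer above.

curl -XPUT '127.0.0.1:9200/_river/projects_river/_meta' -d '{
    "type" : "jdbc",
    "index" : {
        "index" : "test_projects",
        "type" : "project",
        "bulk_size" : 100,
        "max_bulk_requests" : 1,
        "autocommit" : true
    },
    "jdbc" : {
        "driver" : "com.mysql.jdbc.Driver",
        "poll" : "1m",
        "strategy" : "poll",
        "url" : "jdbc:mysql://localhost:3306/test",
        "user" : "root",
        "sql" : "SELECT id as _id, name, updated_at FROM projects p WHERE p.updated_at > date_sub(now(), interval 1 minute)"
    }
}'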
I am trying to get MongoDB running on my localhost (Windows) with authentication.
To do so, I first have to add a user, right?
I did so by starting the daemon using this command:
C:\[…]\mongod.exe -f C:\[…]\mongo.config
mongo.config contains the following:
# Basic database configuration
dbpath = C:\[…]\db\
bind_ip = 127.0.0.1
port = 20571
# Security
noauth = true
# Administration & Monitoring
nohttpinterface = true
After that I connected via this command:
C:\[…]\mongo.exe --port 20571 127.0.0.1
There I added a user:
> use admin
switched to db admin
> db.addUser('test', 'test')
{ "n" : 0, "connectionId" : 1, "err" : null, "ok" : 1 }
{
"user" : "test",
"readOnly" : false,
"pwd" : "a6de521abefc2fed4f5876855a3484f5",
"_id" : ObjectId("50db155e157524b3d2195278")
}
To check if everything worked I did the following:
> db.system.users.find()
{ "_id" : ObjectId("50db155e157524b3d2195278"), "user" : "test", "readOnly" : false, "pwd" : "a6de521abefc2fed4f5876855a3484f5" }
Which seemed OK to me.
After that I changed "noauth = true" to "auth = true" in the mongo.config file and restarted the daemon.
Now I expected to be able to connect with user and password:
C:\[…]\mongo.exe --port 20571 -u test -p test 127.0.0.1
Which denied access to me with this message:
MongoDB shell version: 2.0.4
connecting to: 127.0.0.1:20571/127.0.0.1
Wed Dec 26 16:24:36 uncaught exception: error { "$err" : "bad or malformed command request?", "code" : 13530 }
exception: login failed
So here's my question: Why does the login fail?
I can still connect without providing a user and password, but can't access any data because of "unauthorized db:admin lock type:-1 client:127.0.0.1", which is actually what I expected.
As Andrei Sfat told me in the comments on the question, I made 2 major errors.
First, I thought I could pass the IP to the client as a simple positional argument, but you have to use --host for that.
Instead, the parameter I thought was the IP address is actually supposed to be the db name.
So the correct command to connect to a server is as follows:
C:\[…]\mongo.exe --port 20571 -u test -p test --host 127.0.0.1 admin
Second, users are per database. As I only added the user "test" to the db "admin", it only works there.
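For completeness, a rough sketch of creating a user in another database (the database name mydb and the appuser/apppass credentials below are placeholders): authenticate against admin with the existing user, add a user to the target database, then connect to that database directly.

C:\[…]\mongo.exe --port 20571 -u test -p test --host 127.0.0.1 admin
> use mydb
> db.addUser('appuser', 'apppass')
> exit

C:\[…]\mongo.exe --port 20571 -u appuser -p apppass --host 127.0.0.1 mydb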
Apparently the auth = true configuration wasn't loaded successfully. Did you forget the -f parameter when you restarted mongod.exe?
C:\[…]\mongod.exe -f C:\[…]\mongo.config
I am running updates against a database in MongoLab (Heroku) and cannot get information from getLastError.
As an example, below are statements to update a collection in a MongoDB database running locally in my machine (db version v2.0.3-rc1).
ariels-MacBook:mongodb ariel$ mongo
MongoDB shell version: 2.0.3-rc1
connecting to: test
> db.mycoll.insert({'key': '1','data': 'somevalue'});
> db.mycoll.find();
{ "_id" : ObjectId("505bcc5783cdc9e90ffcddd8"), "key" : "1", "data" : "somevalue" }
> db.mycoll.update({'key': '1'},{$set: {'data': 'anothervalue'}});
> db.runCommand('getlasterror');
{
"updatedExisting" : true,
"n" : 1,
"connectionId" : 4,
"err" : null,
"ok" : 1
}
>
All is well locally.
Now I switch to a database in MongoLab and run the same statements to update a document. getLastError is not returning an updatedExisting field, so I am unable to tell whether my update was successful or not.
ariels-MacBook:mongodb ariel$ mongo ds0000000.mongolab.com:00000/heroku_app00000 -u someuser -p somepassword
MongoDB shell version: 2.0.3-rc1
connecting to: ds000000.mongolab.com:00000/heroku_app00000
> db.mycoll.insert({'key': '1','data': 'somevalue'});
> db.mycoll.find();
{ "_id" : ObjectId("505bcf9b2421140a6b8490dd"), "key" : "1", "data" : "somevalue" }
> db.mycoll.update({'key': '1'},{$set: {'data': 'anothervalue'}});
> db.runCommand('getlasterror');
{
"n" : 0,
"lastOp" : NumberLong("5790450143685771265"),
"connectionId" : 1097505,
"err" : null,
"ok" : 1
}
> db.mycoll.find();
{ "_id" : ObjectId("505bcf9b2421140a6b8490dd"), "data" : "anothervalue", "key" : "1" }
>
Did anyone run into this?
If it matters, my resource at MongoLab is running mongod v2.0.7 (my shell is 2.0.3).
Not exactly sure what I am missing.
I am waiting to hear from their support (I will post here when I hear back) but wanted to check with you fine folks here as well just in case.
Thank you.
This looks to be a limitation of not having admin privileges on the mongod process. You might file a ticket with 10gen, as it doesn't seem like a necessary limitation.
When I run Mongo in auth mode on my laptop I need to authenticate as a user in the admin database in order to see an "n" other than 0 or the "updatedExisting" field. When I authenticate as a user in any other database I get similar results to what you're seeing in MongoLab production.
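To illustrate this locally, here is a rough sketch (the adminuser/adminpass credentials are placeholders, and it assumes a mongod started with auth enabled): authenticating against the admin database before issuing the update makes getLastError report updatedExisting and an "n" other than 0.

mongo localhost:27017/admin -u adminuser -p adminpass --eval '
  var t = db.getSiblingDB("test");
  t.mycoll.update({key: "1"}, {$set: {data: "anothervalue"}});
  printjson(t.runCommand("getlasterror"));
'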
(Full disclosure: I work for MongoLab. As a side note, I don't see the support ticket you mention in our system. We'd be happy to work with you directly if you'd like. You can reach us at support@mongolab.com or http://support.mongolab.com.)