I'm trying to visualize MongoDB data in Kibana using a Logstash configuration. Below is my configuration. I'm getting some output in the terminal, and it loops forever. I can't see any index created with the name mentioned in the config file, and even if the index was created, it doesn't have any data in it: the Discover tab says no results match. How do I make this configuration visualize the data in Kibana?
input {
  mongodb {
    uri => "mongodb+srv:###############?retryWrites=true&w=majority"
    placeholder_db_dir => "C:/logstash-mongodb"
    placeholder_db_name => "logstash1_sqlite.db"
    collection => "logs"
    batch_size => 1
  }
}

filter {
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    action => "index"
    index => "ayesha_logs"
    hosts => ["localhost:9200"]
  }
}
This is the URL I'm using to check the index: http://localhost:9200/ayesha_logs/_search?pretty
Terminal logs:
D, [2020-10-01T08:11:45.717000 #2372] DEBUG -- : MONGODB | range-api-test-cluster-shard-00-02.icqif.azure.mongodb.net:27017 req:259 conn:1:1 sconn:231839 | coexistence-poc.listCollections | STARTED | {"listCollections"=>1, "cursor"=>{}, "nameOnly"=>true, "$db"=>"coexistence-poc", "$clusterTime"=>{"clusterTime"=>#<BSON::Timestamp:0x32598cb2 #increment=1, #seconds=1601532700>, "signature"=>{"hash"=><BSON::Binary:0x2622 type=generic data=0xfaf25a8d85...
D, [2020-10-01T08:11:45.755000 #2372] DEBUG -- : MONGODB | range-api-test-cluster-shard-00-02.icqif.azure.mongodb.net:27017 req:259 | coexistence-poc.listCollections | SUCCEEDED | 0.038s
D, [2020-10-01T08:11:50.801000 #2372] DEBUG -- : MONGODB | range-api-test-cluster-shard-00-02.icqif.azure.mongodb.net:27017 req:260 conn:1:1 sconn:231839 | coexistence-poc.find | STARTED | {"find"=>"coexistence-pinfobackfill-logs", "filter"=>{"_id"=>{"$gt"=>BSON::ObjectId('5f71f009b6b9115861d379d8')}}, "limit"=>50, "$db"=>"coexistence-poc", "$clusterTime"=>{"clusterTime"=>#<BSON::Timestamp:0x32598cb2 #increment=1, #seconds=1601532700>, ...
D, [2020-10-01T08:11:50.843000 #2372] DEBUG -- : MONGODB | range-api-test-cluster-shard-00-02.icqif.azure.mongodb.net:27017 req:260 | coexistence-poc.find | SUCCEEDED | 0.042s
D, [2020-10-01T08:11:50.859000 #2372] DEBUG -- : MONGODB | range-api-test-cluster-shard-00-02.icqif.azure.mongodb.net:27017 req:261 conn:1:1 sconn:231839 | coexistence-poc.listCollections | STARTED | {"listCollections"=>1, "cursor"=>{}, "nameOnly"=>true, "$db"=>"coexistence-poc", "$clusterTime"=>{"clusterTime"=>#<BSON::Timestamp:0x32598cb2 #increment=1, #seconds=1601532700>, "signature"=>{"hash"=><BSON::Binary:0x2622 type=generic data=0xfaf25a8d85...
D, [2020-10-01T08:11:50.906000 #2372] DEBUG -- : MONGODB | range-api-test-cluster-shard-00-02.icqif.azure.mongodb.net:27017 req:261 | coexistence-poc.listCollections | SUCCEEDED | 0.047s
Did you create your Kibana index pattern?
If not, go to Menu > Stack Management > Kibana > Index Patterns, click on Create index pattern, and follow the steps.
You will then be able to use your index in the Discover and Visualize tabs.
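Before creating the index pattern, it may also be worth confirming that Logstash actually created the index and that it contains documents. A quick check from the command line (assuming Elasticsearch is reachable on localhost:9200, as in the config above) could look like this:

curl "http://localhost:9200/_cat/indices/ayesha_logs?v"
curl "http://localhost:9200/ayesha_logs/_count?pretty"

If the index is missing or the count is 0, the problem is on the Logstash/MongoDB side rather than in Kibana.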
I've set up my environment using Docker, based on this guide.
Using kafka-console-producer, I will send this line:
Hazriq|27|Undegrad|UNITEN
I want this data to be ingested into Kusto like this:
+--------+-----+----------------+------------+
| Name | Age | EducationLevel | University |
+--------+-----+----------------+------------+
| Hazriq | 27 | Undegrad | UNITEN |
+--------+-----+----------------+------------+
Can this be handled by Kusto using the mapping (which I'm still trying to understand), or should this be handled on the Kafka side?
I tried @daniel's suggestion:
.create table ParsedTable (name: string, age: int, educationLevel: string, univ:string)
.create table ParsedTable ingestion csv mapping 'ParsedTableMapping' '[{ "Name" : "name", "Ordinal" : 0},{ "Name" : "age", "Ordinal" : 1 },{ "Name" : "educationLevel", "Ordinal" : 2},{ "Name" : "univ", "Ordinal" : 3}]'
kusto.tables.topics_mapping=[{'topic': 'kafkatopiclugiaparser','db': 'kusto-test', 'table': 'ParsedTable','format': 'psv', 'mapping':'ParsedTableMapping'}]
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter=org.apache.kafka.connect.storage.StringConverter
but I'm getting this instead:
+----------------------------+-----+----------------+------+
| Name | Age | EducationLevel | Univ |
+----------------------------+-----+----------------+------+
| Hazriq|27|Undergrad|UNITEN | | | |
+----------------------------+-----+----------------+------+
Currently, the connector passes the data along as it comes (no manipulation on the client side), and any parsing is left to Kusto.
The psv format is supported by Kusto, so this should be possible by setting the format to psv and providing a mapping reference.
When adding the plugin as described, you should be able to set it up like this:
kusto.tables.topics_mapping=[{'topic': 'testing1','db': 'testDB', 'table': 'KafkaTest','format': 'psv', 'mapping':'KafkaMapping'}]
The mapping itself can be defined in Kusto as described in the Kusto docs, as sketched below.
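For reference, the commands from the question above are already of the right shape for this - a table plus an ordinal-based csv mapping, which (as far as I know) Kusto also applies to other delimited formats such as psv:

.create table ParsedTable (name: string, age: int, educationLevel: string, univ: string)

.create table ParsedTable ingestion csv mapping 'ParsedTableMapping' '[{ "Name": "name", "Ordinal": 0 }, { "Name": "age", "Ordinal": 1 }, { "Name": "educationLevel", "Ordinal": 2 }, { "Name": "univ", "Ordinal": 3 }]'

The connector config then references that mapping by name via 'mapping':'ParsedTableMapping', as in the question.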
Ingestion of data as you've shown using the psv format is supported (see below) - it's probably just a matter of debugging why your client-side invocation of the underlying commands isn't yielding the expected result. If you could share the full flow and code, including parameters, that may be helpful.
.create table ParsedTable (name: string, age: int, educationLevel: string, univ:string)
.ingest inline into table ParsedTable with(format=psv) <| Hazriq|27|Undegrad|UNITEN
ParsedTable:
| name | age | educationLevel | univ |
|--------|-----|----------------|--------|
| Hazriq | 27 | Undegrad | UNITEN |
I am a beginner in Laravel and I'm trying out the firstOrNew Eloquent method, as described in the documentation.
I have the following table in MySQL:
SELECT * FROM `ratings`;
+----+-----------+-----------+--------+---------------------+---------------------+
| id | realty_id | client_id | rating | created_at | updated_at |
+----+-----------+-----------+--------+---------------------+---------------------+
| 1 | 67 | 29548 | -1 | 2019-03-23 12:14:32 | 2019-03-23 12:26:57 |
| 2 | 67 | 29549 | 1 | 2019-03-23 12:14:53 | 2019-03-23 12:14:53 |
| 3 | 67 | 29547 | 1 | 2019-03-23 12:20:47 | 2019-03-23 12:20:47 |
| 4 | 67 | 29546 | 1 | 2019-03-23 12:24:47 | 2019-03-23 12:26:52 |
+----+-----------+-----------+--------+---------------------+---------------------+
4 rows in set (0,00 sec)
In Laravel, I have my Rating model set up as follows:
<?php

namespace App\Models;

use App\Models\BaseModel;

class Rating extends BaseModel
{
    protected $table = 'ratings';

    protected $fillable = [
        'client_id',
        'realty_id',
        'rating',
    ];

    public function realty()
    {
        return $this->belongsTo('App\Models\Realty');
    }

    public function client()
    {
        return $this->belongsTo('App\Models\Client');
    }
}
When I try to retrieve an existing record by passing an existing client_id and realty_id, I get the expected result:
$rating = Rating::firstOrNew([
    'client_id' => 29548,
    'realty_id' => 67,
]);

dd([$rating->client_id, $rating->realty_id]);

/*
Results in
array:2 [▼
  0 => 29548
  1 => 67
]
*/
However, when I try the same code with a non-existent client_id (say it's the client's first time submitting a rating), I get null for both properties:
$rating = Rating::firstOrNew([
    'client_id' => 29550, // THIS ID IS NOT IN THE `ratings` TABLE
    'realty_id' => 67,
]);

dd([$rating->client_id, $rating->realty_id]);

/*
Results in
array:2 [▼
  0 => null
  1 => null
]
*/
If I try to use firstOrCreate instead of firstOrNew, I get an Internal Server Error with the following message:
SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails (aptoadmi_aptovc.ratings, CONSTRAINT realty_ratings_fk FOREIGN KEY (realty_id) REFERENCES realties (id) ON DELETE CASCADE ON UPDATE CASCADE) (SQL: insert into ratings (updated_at, created_at) values (2019-04-09 17:44:09, 2019-04-09 17:44:09))
Which is expected, since the values for client_id and realty_id are not being passed on to the INSERT statement.
What am I doing wrong?
I have Celery set up and working together with Django. I have some periodic tasks that run. The Celery log shows that the tasks are executed and that they return something.
[2017-03-26 14:34:27,039: INFO/MainProcess] Received task: my_webapp.apps.events.tasks.clean_outdated[87994396-04f7-452b-a964-f6bdd07785e0]
[2017-03-26 14:34:28,328: INFO/PoolWorker-1] Task my_webapp.apps.events.tasks.clean_outdated[87994396-04f7-452b-a964-f6bdd07785e0] succeeded in 0.05246314400005758s: 'Removed 56 event(s)
| Removed 4 SGW(s)
'
But the results are not showing up on the django-celery-results admin page.
These are my settings:
CELERY_BROKER_URL = os.environ.get('BROKER_URL')
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Europe/Stockholm'
CELERY_RESULT_BACKEND = 'django-cache'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
CELERY_RESULT_DB_SHORT_LIVED_SESSIONS = True # Fix for low traffic sites like this one
I have also tried setting CELERY_RESULT_BACKEND = 'django-db'. I know the migrations have been run (when using those settings); the table exists in the database:
my_webapp=> \dt
List of relations
Schema | Name | Type | Owner
--------+--------------------------------------+-------+----------------
...
public | django_celery_beat_crontabschedule | table | my_webapp
public | django_celery_beat_intervalschedule | table | my_webapp
public | django_celery_beat_periodictask | table | my_webapp
public | django_celery_beat_periodictasks | table | my_webapp
public | django_celery_results_taskresult | table | my_webapp
...
(26 rows)
Google doesn't give me much help; most answers are about old libraries like djcelery. Any idea how to get the results into the table?
I am new to database indexing. My application has the following "find" and "update" queries, which search by single and multiple fields:
query                | reference | timestamp | phone | username | key | Address
update               | x         |           |       |          |     |
findOne              |           | x         | x     |          |     |
find/limit:16        |           | x         | x     | x        |     |
find/limit:11        |           | x         |       |          | x   | x
find/limit:1/sort:-1 |           | x         | x     |          | x   | x
find                 |           | x         |       |          |     |
1) update({"reference":"f0d3dba-278de4a-79a6cb-1284a5a85cde"}, ...)
2) findOne({"timestamp":"1466595571", "phone":"9112345678900"})
3) find({"timestamp":"1466595571", "phone":"9112345678900", "username":"a0001a"}).limit(16)
4) find({"timestamp":"1466595571", "key":"443447644g5fff", "address":"abc road, mumbai, india"}).limit(11)
5) find({"timestamp":"1466595571", "phone":"9112345678900", "key":"443447644g5fff", "address":"abc road, mumbai, india"}).sort({"_id":-1}).limit(1)
6) find({"timestamp":"1466595571"})
I am creating these indexes:
db.coll.createIndex( { "reference": 1 } ) //for 1st, 6th query
db.coll.createIndex( { "timestamp": 1, "phone": 1, "username": 1 } ) //for 2nd, 3rd query
db.coll.createIndex( { "timestamp": 1, "key": 1, "address": 1, phone: 1 } ) //for 4th, 5th query
Is this the correct way?
Please help me
Thank you
I think what you have done looks fine. One way to check whether your query is using an index, which index is being used, and whether the index is effective is to use the explain() function alongside your find().
For example:
db.coll.find({"timestamp":"1466595571"}).explain()
will return a JSON document which details which index (if any) was used. In addition to this, you can ask explain to return "executionStats", e.g.:
db.coll.find({"timestamp":"1466595571"}).explain("executionStats")
This will tell you how many index keys were examined to find the result set as well as the execution time and other useful metrics.
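For example, to check that the third query can use the compound index (the index name mentioned below is just the default name MongoDB would generate for { "timestamp": 1, "phone": 1, "username": 1 }):

db.coll.find({"timestamp":"1466595571", "phone":"9112345678900", "username":"a0001a"}).explain("executionStats")

In the output, look for an IXSCAN stage inside winningPlan (it names the index used, e.g. timestamp_1_phone_1_username_1); a COLLSCAN stage means no index was used. In executionStats, totalKeysExamined and totalDocsExamined should ideally be close to nReturned.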
I have a posts table like so:
+-----+----------+------------+------------+
| id | topic_id | text | timestamp |
+-----+----------+------------+------------+
| 789 | 2 | foobar | 1396026357 |
| 790 | 2 | foobar | 1396026358 |
| 791 | 2 | foobar | 1396026359 |
| 792 | 3 | foobar | 1396026360 |
| 793 | 3 | foobar | 1396026361 |
+-----+----------+------------+------------+
How would I go about "grouping" the results by topic_id while pulling the most recent record (sorted by timestamp descending)?
I've come to the understanding that I might not want group_by but rather distinct on. My Postgres query looks like this:
select distinct on (topic_id) topic_id, id, text, timestamp
from posts
order by topic_id desc, timestamp desc;
This works great. However, I can't figure out whether this is something I can do in DBIx::Class without having to write a custom ResultSource::View. I've tried various arrangements of group_by with select and columns, and have tried distinct => 1. If/when a result is returned, it doesn't actually preserve the uniqueness.
Is there a way to write the query I am trying through a resultset search, or is there perhaps a better way to achieve the same result through a different type of query?
Check out the section in the DBIC Cookbook on grouping results.
I believe what you want is something along these lines, though:
my $rs = $base_posts_rs->search(undef, {
    columns  => [ { topic_id => "topic_id" }, { text => "text" }, { timestamp => "timestamp" } ],
    group_by => ["topic_id"],
    order_by => [ { -desc => "topic_id" }, { -desc => "timestamp" } ],
})
Edit: A quick and dirty way to get around strict SQL grouping would be something like this:
my $rs = $base_posts_rs->search(undef, {
    columns => [
        { topic_id  => \"MAX(topic_id)" },
        { text      => \"MAX(text)" },
        { timestamp => \"MAX(timestamp)" },
    ],
    group_by => ["topic_id"],
    order_by => [ { -desc => "topic_id" }, { -desc => "timestamp" } ],
})
Of course, use the appropriate aggregate function for your needs.
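For what it's worth, the grouped resultset above should generate SQL roughly along these lines (the exact quoting and aliasing depends on your DBIx::Class and SQL::Abstract versions), which is why every non-grouped column needs an aggregate:

SELECT MAX(topic_id) AS topic_id, MAX(text) AS text, MAX(timestamp) AS timestamp
FROM posts
GROUP BY topic_id
ORDER BY topic_id DESC, timestamp DESC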