Laravel 6 collections - Extract a single value from a record - eloquent

I have the following collection:
$products = $this->productRepository->getByCompanyId(1)->get();
Illuminate\Database\Eloquent\Collection {#856
  #items: array:1 [
    0 => App\OrderItemProduct {#824
      #table: "product"
      ...
      #attributes: array:3 [
        "available" => "5"
        "product_name" => "Product X"
        "product_id" => 1
      ]
      ...
    }
  ]
}
I want to get the available value for a single record. I tried the following, which works if a record exists:
$available = $products->where('product_id','=', 1)->first()['available'];
However, it throws the exception "Trying to access array offset on value of type null" if I try to retrieve the available value of a record that doesn't exist in the collection, e.g. product_id 17:
$available = $products->where('product_id','=', 17)->first()['available'];
This was working in Laravel v6 and earlier, but it no longer seems to work in Laravel 6.17.1.
Note: in Laravel 5.5 I was able to use the value method as follows, but that started throwing the same error in v5.7, so I switched to the approach above, which now no longer works either in the latest Laravel version.
$available = $products->where('product_id', '=', 17)->value('available');
Also, when I use get, why does it always return null, even when the record exists?
$available = $products->where('product_id', '=', 1)->get('available');
Any ideas how to fix?
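For what it's worth, "Trying to access array offset on value of type null" is a PHP 7.4 notice rather than a Laravel error, so a PHP upgrade may explain the change in behaviour. A null-safe way to read the value (a sketch, assuming Laravel 6's optional() helper and the collection firstWhere() method, both available since 5.5) would be:

// Returns null instead of erroring when no matching record exists.
$available = optional($products->firstWhere('product_id', 17))->available;

// Equivalent, with an explicit default via the null coalescing operator:
$record = $products->where('product_id', '=', 17)->first();
$available = $record['available'] ?? null;

As for get(): on a collection it looks an item up by its collection key (returning null by default when the key is missing), not by attribute name, which is why get('available') returns null even when a matching record exists.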

Related

SugarCRM Rest API set_relationship between Contacts and Documents

I am trying to link (set_relationship) a document and a contact in SugarCRM. I am not sure how to construct the name_value_list for this; at least, that is what I believe to be wrong.
I have tried the following:
1.
'name_value_list': []
2.
'name_value_list' : [{
'name': "documents_contacts",
'value': 'Other',
}],
3.
'name_value_list': [{'table': "%s_%s" % (ModuleName, LinkedModuleName)},
                    {'fields': [
                        {"id": str(uuid.uuid1())},
                        {"date_modified": str(datetime.datetime.now())},
                        {"deleted": '0'},
                        {"document_id": RecordID},
                        {"contact_id": LinkedRecordID},
                    ]}]
4.
'name_value_list':[{"%s_%s" % (ModuleName, LinkedModuleName): 'Other',
"id": str(uuid.uuid1()),
"date_modified": str(datetime.datetime.now()),
"deleted": '0',
"document_id": RecordID,
"contact_id": LinkedRecordID
}]
SugarCRM CE Version 6.5.20 (Build 1001)
SugarCRM v4_1 Rest API Documentation:
/**
 * Set a single relationship between two beans. The items are related by module name and id.
 *
 * @param String $session -- Session ID returned by a previous call to login.
 * @param String $module_name -- name of the module that the primary record is from. This name should be the name the module was developed under (changing a tab name in Studio does not affect the name that should be passed into this method).
 * @param String $module_id -- The ID of the bean in the specified module_name
 * @param String $link_field_name -- name of the link field which relates to the other module for which the relationship needs to be generated.
 * @param array $related_ids -- array of related record ids for which relationships need to be generated
 * @param array $name_value_list -- The keys of the array are the SugarBean attributes, the values of the array are the values the attributes should have.
 * @param integer $delete -- Optional; if 0 or nothing is passed it will add the relationship for related_ids, and if 1 is passed it will delete this relationship for related_ids.
 * @return Array - created - integer - How many relationships have been created
 *               - failed - integer - How many relationship creations failed
 *               - deleted - integer - How many relationships were deleted
 * @exception 'SoapFault' -- The SOAP error, if any
 */
Method [ public method set_relationship ] {
- Parameters [7] {
Parameter #0 [ $session ]
Parameter #1 [ $module_name ]
Parameter #2 [ $module_id ]
Parameter #3 [ $link_field_name ]
Parameter #4 [ $related_ids ]
Parameter #5 [ $name_value_list ]
Parameter #6 [ $delete ]
}
}
Python 3.7
def SetRelationship(self, ModuleName, ModuleID, LinkFieldName, RelatedID):
    method = 'set_relationship'
    data = {
        'session': self.SessionID,
        'module_name': ModuleName,
        'module_id': ModuleID,
        'link_field_name': LinkFieldName,
        'related_ids': [RelatedID, ]
    }
    response = json.loads(self.request(method, data))
    return response
SetRelationship('Documents', 'e9d22076-02fe-d95d-1abb-5d572e65dd46', 'Contacts', '2cdc28d8-763e-6232-2788-57f4e19a9ea0')
Result:
{'created': 0, 'failed': 1, 'deleted': 0}
Expected Result:
{'created': 1, 'failed': 0, 'deleted': 0}
You probably meant to call
SetRelationship('Documents', 'e9d22076-02fe-d95d-1abb-5d572e65dd46', 'contacts', '2cdc28d8-763e-6232-2788-57f4e19a9ea0')
Notice the lowercase contacts here, as the API expects the name of the link field in Documents, not the name of the module.
If that still doesn't fix the issue, check the sugarcrm.log and the php log for errors.
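For completeness, here is a sketch of the question's helper extended with the two optional parameters listed in the API documentation above; the method name, the empty default name_value_list, and delete=0 are assumptions, untested against a live instance:

def SetRelationshipFull(self, ModuleName, ModuleID, LinkFieldName, RelatedIDs,
                        NameValueList=None, Delete=0):
    # name_value_list and delete are optional per the v4_1 docs quoted above.
    data = {
        'session': self.SessionID,
        'module_name': ModuleName,
        'module_id': ModuleID,
        'link_field_name': LinkFieldName,  # e.g. the lowercase 'contacts' link field
        'related_ids': RelatedIDs,         # list of related record ids
        'name_value_list': NameValueList or [],
        'delete': Delete,                  # 1 deletes the relationship instead of adding it
    }
    return json.loads(self.request('set_relationship', data))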

Importing nested JSON documents into Elasticsearch and making them searchable

We have a MongoDB collection which we want to import into Elasticsearch (for now, as a one-off effort). To this end, we have exported the collection with mongoexport. It is a huge JSON file with entries like the following:
{
    "RefData" : {
        "DebtInstrmAttrbts" : {
            "NmnlValPerUnit" : "2000",
            "IntrstRate" : {
                "Fxd" : "3.1415"
            },
            "MtrtyDt" : "2020-01-01",
            "TtlIssdNmnlAmt" : "200000000",
            "DebtSnrty" : "SNDB"
        },
        "TradgVnRltdAttrbts" : {
            "IssrReq" : "false",
            "Id" : "BMTF",
            "FrstTradDt" : "2019-04-01T12:34:56.789"
        },
        "TechAttrbts" : {
            "PblctnPrd" : {
                "FrDt" : "2019-04-04"
            },
            "RlvntCmptntAuthrty" : "GB"
        },
        "FinInstrmGnlAttrbts" : {
            "ClssfctnTp" : "DBFNXX",
            "ShrtNm" : "AVGO 3.625 10/16/24 c24 (URegS)",
            "FullNm" : "AVGO 3 5/8 10/15/24 BOND",
            "NtnlCcy" : "USD",
            "Id" : "USU1109MAXXX",
            "CmmdtyDerivInd" : "false"
        },
        "Issr" : "549300WV6GIDOZJTVXXX"
    }
}
We are using the following Logstash configuration file to import this data set into Elasticsearch:
input {
  file {
    path => "/home/elastic/FIRDS.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    codec => json
  }
}
filter {
  mutate {
    remove_field => [ "_id", "path", "host" ]
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "firds"
  }
}
All this works fine: the data ends up in the firds index of Elasticsearch, and a GET /firds/_search returns all the entries within the _source field.
We understand that this field is not indexed and thus not searchable, but searchability is exactly what we are after: we want to make all of the entries within the original nested JSON searchable in Elasticsearch.
We assume that we have to adjust the filter {} part of our Logstash configuration, but how? For consistency reasons, it would not be bad to keep the original nested JSON structure, but that is not a must. Flattening would also be an option, so that e.g.
"RefData" : {
"DebtInstrmAttrbts" : {
"NmnlValPerUnit" : "2000" ...
becomes a single key-value pair "RefData.DebtInstrmAttrbts.NmnlValPerUnit" : "2000".
It would be great if we could do that immediately with Logstash, without an additional Python script operating on the JSON file we exported from MongoDB; a sketch of such a filter follows the workaround below.
EDIT: Workaround
Our current workaround is to (1) dump the MongoDB database to dump.json, (2) flatten it with jq using the following expression, and (3) manually import the result into Elasticsearch. The flattening step (2) looks like this:
jq '. as $in | reduce leaf_paths as $path ({}; . + { ($path | join(".")): $in | getpath($path) }) | del(."_id.$oid") '
-c dump.json > flattened.json
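To do the flattening directly in Logstash instead, here is a sketch using the standard ruby filter plugin (untested; it assumes the json codec has already parsed the event into a nested RefData field and that flat field names containing literal dots are acceptable):

filter {
  ruby {
    code => "
      # Recursively walk a hash, emitting dotted keys such as
      # RefData.DebtInstrmAttrbts.NmnlValPerUnit => 2000
      def flatten_hash(prefix, value, out)
        if value.is_a?(Hash)
          value.each { |k, v| flatten_hash(prefix + '.' + k, v, out) }
        else
          out[prefix] = value
        end
      end
      flat = {}
      flatten_hash('RefData', event.get('RefData'), flat)
      flat.each { |k, v| event.set(k, v) }
      event.remove('RefData')
    "
  }
}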
References
Walker Rowe: ElasticSearch Nested Queries: How to Search for Embedded Documents
ElasticSearch search in document and in dynamic nested document
Mapping for Nested JSON document in Elasticsearch
Logstash - import nested JSON into Elasticsearch
Remark for the curious: the JSON shown is a (modified) entry from the Financial Instruments Reference Data System (FIRDS), available from the European Securities and Markets Authority (ESMA), a European financial regulatory agency overseeing the capital markets.

logstash-input-mongodb loop on a "restarting error" - Timestamp

I am trying to use the mongodb plugin as an input for Logstash.
Here is my simple configuration:
input {
  mongodb {
    uri => 'mongodb://localhost:27017/testDB'
    placeholder_db_dir => '/Users/TEST/Documents/WORK/ELK_Stack/LogStash/data/'
    collection => 'logCollection_ALL'
    batch_size => 50
  }
}
filter {}
output { stdout {} }
But I'm facing a "loop issue", probably due to a "timestamp" field, and I don't know what to do about it:
[2018-04-25T12:01:35,998][WARN ][logstash.inputs.mongodb ] MongoDB Input threw an exception, restarting {:exception=>#<TypeError: wrong argument type String (expected LogStash::Timestamp)>}
Along with this DEBUG log:
[2018-04-25T12:01:34.893000 #2900] DEBUG -- : MONGODB | QUERY | namespace=testDB.logCollection_ALL selector={:_id=>{:$gt=>BSON::ObjectId('5ae04f5917e7979b0a000001')}} flags=[:slave_ok] limit=50 skip=0 project=nil |
runtime: 39.0000ms
How can I parametrize my Logstash config to get my output in the stdout console?
It's because of the #timestamp field, which has the ISODate data type. You must remove this field from all documents:
db.getCollection('collection1').update({}, {$unset: {"#timestamp": 1}}, {multi: true})
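If you would rather keep the value than drop it, MongoDB's $rename update operator may avoid the type clash while preserving the data; a sketch against the collection from the question (the new field name timestamp_old is an arbitrary choice):

db.getCollection('logCollection_ALL').update({}, {$rename: {"#timestamp": "timestamp_old"}}, {multi: true})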

Laravel mongoDB groupBy - ERROR: Unrecognized expression '$last'

The code below produces an error. If the groupBy is removed, it works fine, but I need to get only distinct values for common_id. How can I solve this issue?
MasterAffiliateProductMappingMongo::select('_id', 'our_product_id')
    ->where('top_deal', '=', 'true')
    ->orderBy('srp', 'asc')
    ->groupBy('common_id')
    ->get();
Error: [MongoDB\Driver\Exception\RuntimeException] Unrecognized expression '$last'
MongoDB document example:
{
    "_id" : ObjectId("5911af8209ed4456d069b1d1"),
    "product_id" : "MOBDRYWXFKNPZVG6",
    "our_product_id" : "5948e0dca6bc725adb35af2e",
    "mrp" : 0.0,
    "srp" : 500.0,
    "ID" : "5911af8209ed4456d069b1d1",
    "common_id" : ObjectId("5911af8209ed4456d069b1d1"),
    "top_deal" : "true"
}
Error Log:
[2017-06-28 12:19:46] lumen.ERROR: exception
'MongoDB\Driver\Exception\RuntimeException' with message 'Unrecognized
expression '$last'' in
/var/www/html/PaymetryService4/vendor/mongodb/mongodb/src/Operation/Aggregate.php:219
Refer to this link, it works: https://github.com/jenssegers/laravel-mongodb/issues/1185#issuecomment-321267144. We should remove '_id' from the select fields, as in the sketch below.
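A sketch of the adjusted query (untested; field and constraint names are taken from the question):

MasterAffiliateProductMappingMongo::select('our_product_id')
    ->where('top_deal', '=', 'true')
    ->orderBy('srp', 'asc')
    ->groupBy('common_id')
    ->get();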

elasticsearch jdbc river polling--- load data from mysql repeatedly

When using https://github.com/jprante/elasticsearch-river-jdbc I notice that the following curl statement successfully indexes data the first time. However, the river fails to repeatedly poll the database for updates.
To restate: when I run the following, the river successfully connects to MySQL, runs the query, indexes the results, but never runs the query again.
curl -XPUT '127.0.0.1:9200/_river/projects_river/_meta' -d '{
    "type" : "jdbc",
    "index" : {
        "index" : "test_projects",
        "type" : "project",
        "bulk_size" : 100,
        "max_bulk_requests" : 1,
        "autocommit" : true
    },
    "jdbc" : {
        "driver" : "com.mysql.jdbc.Driver",
        "poll" : "1m",
        "strategy" : "simple",
        "url" : "jdbc:mysql://localhost:3306/test",
        "user" : "root",
        "sql" : "SELECT name, updated_at from projects p where p.updated_at > date_sub(now(),interval 1 minute)"
    }
}'
Tailing the log, I see:
[2013-09-27 16:32:24,482][INFO ][org.elasticsearch.river.jdbc.strategy.simple.SimpleRiverFlow] next run, waiting 1m
[2013-09-27 16:33:24,488][INFO ][org.elasticsearch.river.jdbc.strategy.simple.SimpleRiverFlow] next run, waiting 1m
[2013-09-27 16:34:24,494][INFO ][org.elasticsearch.river.jdbc.strategy.simple.SimpleRiverFlow] next run, waiting 1m
But the index stays empty. I am running on a MacBook Pro with Elasticsearch 0.90.2 (stable, HEAD) and mysql-connector-java-5.1.25-bin.jar in the river plugins directory.
I think if you switch your strategy value from "simple" to "poll" you may get what you are looking for; it has worked for me with the jdbc river on that version of Elasticsearch against MS SQL.
Also, you will need to select a field as _id (SELECT primarykey AS _id), as the river uses this to determine which records have been added, deleted, or updated. Putting both changes together gives something like the sketch below.
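A sketch of the adjusted river definition (untested; the id primary-key column in projects is an assumption):

curl -XPUT '127.0.0.1:9200/_river/projects_river/_meta' -d '{
    "type" : "jdbc",
    "index" : {
        "index" : "test_projects",
        "type" : "project",
        "bulk_size" : 100,
        "max_bulk_requests" : 1,
        "autocommit" : true
    },
    "jdbc" : {
        "driver" : "com.mysql.jdbc.Driver",
        "poll" : "1m",
        "strategy" : "poll",
        "url" : "jdbc:mysql://localhost:3306/test",
        "user" : "root",
        "sql" : "SELECT id AS _id, name, updated_at FROM projects p WHERE p.updated_at > date_sub(now(), interval 1 minute)"
    }
}'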