LinkedIn API returns empty results for shares / posts, and nothing for adAnalyticsV2 - linkedin-api

For my request:
https://api.linkedin.com/v2/adAnalyticsV2?q=analytics&dateRange=(start:(day:1,month:9,year:2020),end:(day:1,month:9,year:2021))&timeGranularity=DAILY&pivot=SHARE&fields=externalWebsiteConversions,dateRange,impressions,landingPageClicks,likes,shares,costInLocalCurrency,pivot,pivotValue&companies=List(urn%3Ali%3Aorganization%3A<ORGANIZATION_ID>)
I always get an empty response. What could be causing this? I do have all the required
permissions:
r_1st_connections_size, r_ads_reporting, r_basicprofile, r_emailaddress, r_liteprofile, r_organization_social, rw_ads, rw_organization_admin, w_member_social, w_organization_social
My goal is to get metrics such as likes and impressions for all of my organization's posts.
I appreciate any input, thanks a lot.
As a response I get:
{
"paging": {
"start": 0,
"count": 10,
"links": []
},
"elements": []
}
Whereas for my shares:
https://api.linkedin.com/v2/shares?q=owners&owners=urn:li:company:<MY_ORGANIZATION_ID>
I get an empty elements array, but with the right total count in paging:
{
"paging": {
"start": 0,
"count": 10,
"links": [
{
"type": "application/json",
"rel": "next",
"href": "/v2/shares?count=10&owners=urn%3Ali%3Acompany%3A<MY_ORGANIZATION_ID>&q=owners&start=0"
}
],
"total": 569
},
"elements": []
}
Of course I use the generated bearer token for the request, and it has the listed permissions when I check it in the https://www.linkedin.com/developers/tools/oauth/token-inspector.
I'm looking forward to any input and help. Thanks a lot.

It's possibly because you don't have admin access on the company page; you'll likely need to obtain a proper API key and secret to interact with the page via the LinkedIn API.

I think there is a problem with the API and how it filters the data. I ran into the same issue and just kept trying.
I came to the conclusion that you have to ignore the documentation's claim that you have reached the end of the data when the response contains fewer results than you requested. Also ignore the pagination info returned in the response.
In your example you use the defaults for sharesPerOwner and count. You may want to set at least sharesPerOwner, because it defaults to 1. I worked with count=5 and sharesPerOwner=3000 (the maximum value allowed).
For me, count=5&start=0 returned 3 results. The response told me to fetch the next page with count=5&start=3, but that returned no results. Next I tried count=5&start=5; no results there, either. But my next attempt with count=5&start=10 returned a record. Just one, but more than none, and I kept increasing start by count over and over again. That worked until start reached 3000 (sharesPerOwner?). I didn't get 3000 records, but around 1600.
So what I think is happening is that the API gathers records in a temp table that contains "dead" (null) rows, something like this:
position   content
0          Record 1
1          Record 2
2          Record 3
3 - 9      null ("dead" rows)
10         Record 5
...        more records and nulls
With that layout, start=0 returns the live records at the top, start=3 and start=5 land on dead rows and return nothing, and start=10 happens to hit a live record again. But that table has "dead" records, and the pagination is too stupid to deal with them.
TL;DR:
My conclusion: keep increasing start step by step (and use a bigger count) until you have all the data you need. Don't be confused by what seems to be the end of the data.
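The workaround can be sketched as a simple loop. This is a minimal Python sketch; fetch_page is a hypothetical stand-in for the authenticated GET to /v2/shares that returns the decoded elements list, and the 3000 ceiling mirrors the sharesPerOwner limit observed above:

```python
def fetch_all_shares(fetch_page, max_start=3000, count=50):
    """Collect records by stepping `start` past empty pages.

    Never stop just because a page comes back empty or short:
    later offsets may still hold records, as described above.
    """
    records = []
    start = 0
    while start < max_start:
        records.extend(fetch_page(start, count))
        start += count
    return records
```

In practice you would also want retry/backoff around each request, but the key point is that an empty page is not treated as the end of the data.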

Related

How can I create a search with filters in Angular/Express/MongoDB?

I am trying to mimic the functionality of the right sidebar of this example for my Angular site.
I don't know what this is called, or even how to go about it on the front end or back end!
My assumption:
Create a form with values coming straight from the DB, showing only the desired parameter (i.e. db.collection.find(query, {parameter: 1})), which would be called again each time a user modifies the form. Additionally, the results would also be updated on selection. I have over 100MB of documents, so returning ALL of them would be troublesome. How can I limit the number of documents returned to, say, 20 or 50 (user input?) and paginate them (1000 documents returned / 50 per page = 20 pages)?
For each input that is selected, a { 'field': value } pair would be added to the query, but I am not sure how to handle an empty value (i.e. what if a user doesn't pick a fuel type or transmission range?).
How do I go about designing such a feature correctly?
1) In your query, use the limit option:
var options = { "limit": 20 };
collection.find({}, options).toArray(...);
2) You can validate empty user input (e.g. with express-validator):
req.checkBody('postparam', 'Invalid postparam').notEmpty();
req.getValidationResult().then(function(result) {
  if (!result.isEmpty()) {
    res.status(400).send('There have been validation errors: ' + util.inspect(result.array()));
    return;
  }
  // based on the result, choose a default value, pass an error,
  // or render the page again asking the user for input
});
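For the empty-value question, the usual approach is to drop empty fields before building the filter, and to compute skip from the page number. A minimal sketch of both ideas (build_query, paging_options, and the field names are illustrative, not a library API):

```python
def build_query(form):
    """Build a Mongo-style filter from form fields, skipping empty or
    missing values so they don't constrain the search."""
    return {field: value for field, value in form.items()
            if value not in (None, "", [])}

def paging_options(page, per_page):
    """Options for the find() call: limit to one page, skip the earlier ones."""
    return {"limit": per_page, "skip": (page - 1) * per_page}

# An unset fuel type or transmission simply drops out of the filter:
query = build_query({"fuelType": "diesel", "transmission": "", "doors": None})
options = paging_options(page=3, per_page=50)
```

With 1000 matches and 50 per page, page numbers 1 through 20 cover the full result set, matching the arithmetic in the question.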

MongoDB find if all array elements are in the other bigger array

I have an array of id's of LEGO parts in a LEGO building.
// building collection
{
"name": "Gingerbird House",
"buildingTime": 45,
"rating": 4.5,
"elements": [
{
"_id": 23,
"requiredElementAmt": 14
},
{
"_id": 13,
"requiredElementAmt": 42
}
]
}
and then
//elements collection
{
"_id": 23,
"name": "blue 6 dots brick",
"availableAmt":20
}
{
"_id": 13,
"name": "red 8 dots brick",
"availableAmt":50
}
{"_id":254,
"name": "green 4 dots brick",
"availableAmt":12
}
How can I find out whether it's possible to build a building? I.e. the database should return a building only if every element in the building document's "elements" array exists in my warehouse (the elements collection) in an amount greater than or equal to the required amount.
In SQL (from which I came recently) I would write something like: SELECT * FROM building WHERE id NOT IN (SELECT fk_building FROM building_elemnt_amt WHERE fk_element NOT IN (1, 3))
Thank you in advance!
I won't pretend I get how it works in SQL without any comparison, but in MongoDB you can do something like this:
db.buildings.find({/* building filter, if any */}).map(function(b){
  var ok = true;
  b.elements.forEach(function(e){
    ok = ok && 1 == db.elements.find({_id: e._id, availableAmt: {$gte: e.requiredElementAmt}}).count();
  });
  return ok ? b : false;
}).filter(function(b){ return b; });
or
db.buildings.find({/* building filter, if any */}).map(function(b){
  var condition = [];
  b.elements.forEach(function(e){
    condition.push({_id: e._id, availableAmt: {$gte: e.requiredElementAmt}});
  });
  return db.elements.find({$or: condition}).count() == b.elements.length ? b : false;
}).filter(function(b){ return b; });
The last one should be a bit quicker, but I did not test it. If performance is key, it may be better to mapReduce it to run the subqueries in parallel.
Note: The examples above work with assumption that buildings.elements have no elements with the same id. Otherwise the array of elements needs to be pre-processed before b.elements.forEach to calculate total requiredElementAmt for non-unique ids.
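That pre-processing step can be sketched in a few lines (plain Python, summing requiredElementAmt per _id before running the availability check; merge_required is an illustrative helper, not a driver API):

```python
from collections import defaultdict

def merge_required(elements):
    """Collapse duplicate element ids, summing their required amounts."""
    totals = defaultdict(int)
    for e in elements:
        totals[e["_id"]] += e["requiredElementAmt"]
    return [{"_id": _id, "requiredElementAmt": amt}
            for _id, amt in totals.items()]
```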
EDIT: How it works:
Select all/some documents from buildings collection with find:
db.buildings.find({/* building filter, if any */})
returns a cursor, which we iterate with map applying the function to each document:
map(function(b){...})
The function itself iterates over elements array for each buildings document b:
b.elements.forEach(function(e){...})
and find number of documents in elements collection for each element e
db.elements.find({_id:e._id, availableAmt:{$gte:e.requiredElementAmt}}).count();
which match a condition:
elements._id == e._id
and
elements.availableAmt >= e.requiredElementAmt
until the first request that returns 0.
Since elements._id is unique, this subquery returns either 0 or 1.
The first 0 in the expression ok = ok && 1 == 0 turns ok to false, so the rest of the elements array is iterated without touching the db.
The function returns either current buildings document, or false:
return ok ? b : false
So result of the map function is an array, containing full buildings documents which can be built, or false for ones that lacks at least 1 resource.
Then we filter this array to get rid of false elements, since they hold no useful information:
filter(function(b){return b})
It returns a new array with all elements for which function(b){return b} doesn't return false, i.e. only full buildings documents.
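The whole check is easy to verify outside the database. Here is a minimal Python sketch of the same logic, with plain data structures standing in for the buildings and elements collections (using the sample documents from the question):

```python
def can_build(building, available_amts):
    """True if every required element is on hand in sufficient amount."""
    return all(
        available_amts.get(req["_id"], 0) >= req["requiredElementAmt"]
        for req in building["elements"]
    )

elements = [
    {"_id": 23, "name": "blue 6 dots brick", "availableAmt": 20},
    {"_id": 13, "name": "red 8 dots brick", "availableAmt": 50},
    {"_id": 254, "name": "green 4 dots brick", "availableAmt": 12},
]
available = {e["_id"]: e["availableAmt"] for e in elements}

house = {"name": "Gingerbird House",
         "elements": [{"_id": 23, "requiredElementAmt": 14},
                      {"_id": 13, "requiredElementAmt": 42}]}

# Keep only the buildings that can actually be built:
buildable = [b for b in [house] if can_build(b, available)]
```

The Gingerbird House passes because 20 >= 14 and 50 >= 42; a building needing more of an element than is available (or an element missing from the warehouse) would be filtered out.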

Is there a way to return part of an array in a document in MongoDB?

Pretend I have this document:
{
"name": "Bob",
"friends": [
"Alice",
"Joe",
"Phil"
],
"posts": [
12,
15,
55,
61,
525,
515
]
}
All is good with only a handful of posts. However, let's say posts grows substantially (and gets to the point of 10K+ posts). A friend mentioned that I might be able to keep the array in order (i.e. the first entry is the ID of the newest post so I don't have to sort) and append new posts to the beginning. This way, I could get the first, say, 10 elements of the array to get the 10 newest items.
Is there a way to retrieve only n posts at a time? I don't need 10K posts being returned when most of them won't even be looked at, but I still need to keep them around for the records.
You can use MongoDB's $slice operator in a projection to get n elements from an array, like the following:
db.collection.find({
//add condition here
}, {
"posts": {
$slice: 3 //set number of element here
//negative number slices from end of array
}
})
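$slice also accepts a two-element [skip, n] form for paging deeper into the array; its semantics line up with ordinary Python slicing. A sketch of the equivalence, with posts as a plain list (slice_like_mongo is just an illustration, not a driver API):

```python
def slice_like_mongo(arr, spec):
    """Mimic projection $slice: a positive int takes from the front,
    a negative int from the back, and a [skip, n] pair pages through."""
    if isinstance(spec, int):
        return arr[:spec] if spec >= 0 else arr[spec:]
    skip, n = spec
    return arr[skip:skip + n]

posts = [12, 15, 55, 61, 525, 515]
first_three = slice_like_mongo(posts, 3)       # like {$slice: 3}
last_two    = slice_like_mongo(posts, -2)      # like {$slice: -2}
next_page   = slice_like_mongo(posts, [3, 3])  # like {$slice: [3, 3]}
```

With newest posts kept at the front of the array, {$slice: 10} returns the 10 newest without sorting, and [10, 10] fetches the next page.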
You can also do this: build a list of the posts you want to have (say, the first 3 posts) and return that list, e.g. in Python:
for doc in db.collection.find(query):
    temp = []
    for i in range(3):
        temp.append(doc['posts'][i])
    return temp
(Plain slicing, doc['posts'][:3], does the same thing and also copes with documents that have fewer than 3 posts.)

search in limited number of record MongoDB

I want to search within the first 1000 records of my collection, named CityDB. I used the following code:
db.CityDB.find({'index.2':"London"}).limit(1000)
but it does not do what I want: it returns the first 1000 matches, whereas I want to search only within the first 1000 records, not all of them. Could you please help me?
Thanks,
Amir
Note that there is no guarantee that your documents are returned in any particular order as long as you don't sort explicitly. Documents in a new collection are usually returned in insertion order, but various things can cause that order to change unexpectedly, so don't rely on it. By the way: auto-generated _ids start with a timestamp, so when you sort by _id, the objects are returned by creation date.
Now about your actual question. When you first want to limit the documents and then perform a filter-operation on this limited set, you can use the aggregation pipeline. It allows you to use $limit-operator first and then use the $match-operator on the remaining documents.
db.CityDB.aggregate([
  // { $sort: { _id: 1 } }, // <- uncomment when you want the first 1000 by creation time
  { $limit: 1000 },
  { $match: { 'index.2': "London" } }
])
I can think of two ways to achieve this:
1) Keep a global counter, and every time you insert data into your collection add a field count = currentCounter and increase currentCounter by 1. When you need to select the latest k elements, you find them this way:
db.CityDB.find({
'index.2':"London",
count : {
'$gte' : currentCounter - k
}
})
This is not atomic, and on a heavily loaded system it might sometimes give you more than k elements (but it can use indexes).
Here is another approach which works nice in the shell:
2) Create your dummy data:
var k = 100;
for (var i = 1; i <= k; i++) {
  db.a.insert({
    _id: i,
    z: Math.floor(1 + Math.random() * 10)
  });
}
output = [];
output = [];
And now find, within the first k records, those where z == 3:
k = 10;
db.a.find().sort({$natural: 1}).limit(k).forEach(function(el){
  if (el.z == 3){
    output.push(el);
  }
});
As you can see, output now holds the matching elements:
output
I think it is pretty straightforward to modify my example for your needs.
P.S. Also take a look at the aggregation framework; there might be a way to achieve what you need with it.
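The limit-then-match idea is easy to sanity-check outside the database. A Python sketch where the collection is a plain list kept in insertion order (search_in_first and the city data are illustrative):

```python
def search_in_first(docs, n, predicate):
    """Filter only within the first n documents, mirroring a $limit
    stage followed by a $match stage in the aggregation pipeline."""
    return [d for d in docs[:n] if predicate(d)]

docs = [{"i": i, "city": "London" if i % 2 == 0 else "Paris"}
        for i in range(2000)]
hits = search_in_first(docs, 1000, lambda d: d["city"] == "London")
```

Matching-then-limiting would instead return 1000 London documents drawn from the whole collection, which is exactly the behavior the question wants to avoid.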

Random Sampling from Mongo

I have a mongo collection with documents. There is one field in every document which is 0 or 1. I need to randomly sample 1000 records from the database and count the number of documents that have that field set to 1. I need to do this sampling 1000 times. How do I do it?
For people coming to this answer: you should now use the $sample aggregation stage, new in MongoDB 3.2.
https://docs.mongodb.org/manual/reference/operator/aggregation/sample/
db.collection_of_things.aggregate(
[ { $sample: { size: 15 } } ]
)
Then add another step to count up the 0s and 1s using $group to get the count. Here is an example from the MongoDB docs.
For MongoDB 3.0 and before, I use an old trick from SQL days (which I think Wikipedia uses for its random-page feature). I store a random number between 0 and 1 in every object I need to randomize; let's call that field "r". You then add an index on "r":
db.coll.ensureIndex({r: 1});
Now to get random x objects, you use:
var startVal = Math.random();
db.coll.find({r: {$gt: startVal}}).sort({r: 1}).limit(x);
This gives you random objects in a single find query. Depending on your needs, this may be overkill, but if you are going to be doing lots of sampling over time, this is a very efficient way without putting load on your backend.
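The same trick can be simulated in a few lines of Python, with the stored r field as a dict key and sorting standing in for the index (sample_with_r is an illustrative helper):

```python
import random

def sample_with_r(docs, x, rng):
    """Pick up to x docs via a pre-stored random key r, like the indexed
    find({r: {$gt: startVal}}).sort({r: 1}).limit(x) query."""
    start_val = rng.random()
    eligible = sorted((d for d in docs if d["r"] > start_val),
                      key=lambda d: d["r"])
    return eligible[:x]
```

As with the original query, a start_val that lands near 1 can leave fewer than x documents eligible; re-rolling or wrapping around to the smallest r values handles that edge case.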
Here's an example in the mongo shell, assuming a collection named collname and a value of interest in thefield:
var total = db.collname.count();
var count = 0;
var numSamples = 1000;
for (i = 0; i < numSamples; i++) {
var random = Math.floor(Math.random()*total);
var doc = db.collname.find().skip(random).limit(1).next();
if (doc.thefield) {
count += (doc.thefield == 1);
}
}
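Repeating that loop 1000 times gives a distribution of counts. A Python sketch of the whole experiment, with the collection as a plain list and skip/limit replaced by random indexing (the 25% proportion of ones is made-up test data):

```python
import random

def count_ones(docs, sample_size, rng):
    """One experiment: sample `sample_size` docs uniformly (with
    replacement) and count how many have thefield == 1."""
    return sum(rng.choice(docs)["thefield"] == 1 for _ in range(sample_size))

rng = random.Random(0)
docs = [{"thefield": 1 if i % 4 == 0 else 0} for i in range(5000)]
counts = [count_ones(docs, 1000, rng) for _ in range(1000)]
mean_count = sum(counts) / len(counts)
```

Each entry in counts is one run of 1000 samples; their spread shows how much the estimate varies between runs, while the mean converges on the true proportion.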
I was going to edit my comment on Stennie's answer with this, but you could also use a separate auto-incrementing ID index here as an alternative if you were to skip over HUGE amounts of records (talking huge here).
I wrote another answer to a question a lot like this one, where someone was trying to find the nth record of a collection:
php mongodb find nth entry in collection
The second half of my answer basically describes one potential method by which you could approach this problem. You would still need to loop 1000 times to get the random rows, of course.
If you are using mongoengine, you can use a SequenceField to generate an incremental counter.
class User(db.DynamicDocument):
counter = db.SequenceField(collection_name="user.counters")
Then, to fetch a random list of, say, 100 users, do the following:
def get_random_users(number_requested):
users_to_fetch = random.sample(range(1, User.objects.count() + 1), min(number_requested, User.objects.count()))
return User.objects(counter__in=users_to_fetch)
where you would call
get_random_users(100)