I tried using pybliometrics to get info from Scopus on publications from my institution, but got different results from the AffiliationSearch and the AffiliationRetrieval.
I got the ID and other info with AffiliationSearch, including the number of documents:
[Affiliation(eid='10-s2.0-60092193', name='Instituto de Biología Molecular y Celular de Rosario', variant='Instituto De Biología Molecular Y Celular De Rosario', documents=1171, city='Rosario', country='Argentina', parent='0')]
Now, when I retrieve the affiliation info with AffiliationRetrieval, the number of documents appears to drop substantially, from 1171 to 221:
Instituto de Biología Molecular y Celular de Rosario in Rosario in Argentina, has 116 associated author(s) and 221 associated document(s) as of 2023-02-16
When I go to the Scopus affiliation link obtained from the same AffiliationRetrieval object, the page again reports 1171 documents.
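For reference, here is a minimal sketch of the two calls (the import path assumes pybliometrics 3.x; the numeric ID 60092193 is taken from the eid in the search result above):

from pybliometrics.scopus import AffiliationSearch, AffiliationRetrieval

# Search by affiliation name: each result carries an eid and a documents count
s = AffiliationSearch("AFFIL(Instituto de Biologia Molecular y Celular de Rosario)")
for aff in s.affiliations:
    print(aff.eid, aff.documents)            # 10-s2.0-60092193  1171

# Retrieve the same affiliation by its numeric Scopus ID
aff = AffiliationRetrieval(60092193)
print(aff.document_count, aff.author_count)  # 221  116
print(aff.scopus_affiliation_link)           # the page there reports 1171 again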
What am I doing wrong?
Thank you in advance
This is mysterious. All APIs, including the Scopus Search API, return the number 1171. Only the Affiliation Retrieval API returns a different number.
I tried with my own institution (60105007). The number obtained from the Affiliation Retrieval API differs from the other APIs, but it is a little higher.
In any case, this is not a problem with your code or with pybliometrics. Maybe consult Scopus directly? It could be a mistake in the Affiliation Retrieval API.
So I am trying to download from OpenStreetMap the list of suburbs in Australia. However, I am noticing that they don't seem to store the data in a way that would be classified as search friendly.
For example, the properties part of the following suburb only includes the local_name; it does not include the state - technically it does, by referring to ref:psma:loc_pid.
However, it's not search friendly because no one is going to search loc_pid numbers - also, it would require another database with loc_pid matches, which I don't see as helpful.
"properties":{"osm_id":-11690275,"boundary":"administrative","admin_level":10,"parents":"-11690291,-2316598,-80500","name":"Kimbolton","local_name":"Kimbolton","name_en":null,"all_tags":{"name":"Kimbolton","place":"locality","boundary":"administrative","way_area":"0.585159","wikidata":"Q55772085","admin_level":"10","ref:psma:loc_pid":"WA1703"}}}
So how can I get boundary information with state and country codes?
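To make the problem concrete, here is a hedged sketch of the extra lookup I would apparently need: resolving the parents IDs from the sample above into names and ISO codes via the public Overpass API (this assumes the negative IDs are OSM relation IDs; the endpoint URL is an assumption). This is exactly the kind of secondary lookup I would like to avoid:

import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"   # public Overpass endpoint
parent_ids = [11690291, 2316598, 80500]                    # abs() of the "parents" values above

query = "[out:json];rel(id:%s);out tags;" % ",".join(str(i) for i in parent_ids)
resp = requests.post(OVERPASS_URL, data={"data": query})
resp.raise_for_status()

for rel in resp.json().get("elements", []):
    tags = rel.get("tags", {})
    # state/country relations usually carry admin_level plus ISO3166-1 / ISO3166-2 codes
    print(rel["id"], tags.get("admin_level"), tags.get("name"),
          tags.get("ISO3166-1"), tags.get("ISO3166-2"))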
Does anyone have experience with using the Custom Audience API to create an audience from contact data of people outside of the U.S.?
The part of the API I am referring to is: https://developers.facebook.com/docs/marketing-api/reference/custom-audience/users/
As you can see, the documentation lacks an explanation of what format the data for the COUNTRY field should be provided in. I also don't know how to format data for the ST field if the country is outside of the US. Should I use some region name or just leave it blank?
I uploaded 1000 users and none of them were recognized, so I suppose that I just did not format the data properly.
Sample request: payload={"schema":["FN","LN","CT","ST","COUNTRY"],"is_raw":true,"data":[["adam","nowak","wroclaw","","poland"]]}&access_token=____&format=json
As you can see, I left the state field blank and provided the country in English translation and lowercase. It is possible that the valid value is e.g. "PL" and not "poland". Also, any information about what should be filled in the state field would help me a lot.
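For completeness, here is roughly how I am sending that payload from Python (the Graph API version and the custom audience ID are placeholders; the payload format is the part in question):

import json
import requests

payload = {
    "schema": ["FN", "LN", "CT", "ST", "COUNTRY"],
    "is_raw": True,
    "data": [["adam", "nowak", "wroclaw", "", "poland"]],
}

resp = requests.post(
    "https://graph.facebook.com/<API_VERSION>/<CUSTOM_AUDIENCE_ID>/users",
    data={
        "payload": json.dumps(payload),
        "access_token": "____",
        "format": "json",
    },
)
print(resp.status_code, resp.text)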
Thanks in advance.
I'm looking for some best practices when it comes to modeling confidential hierarchical data in general and specifically with DynamoDB.
The scenario is best explained with an example:
Let's say we have a number of users. Each user has a number of products. Each product consists of a number of parts.
Typical use cases:
List all products for a given user
List all parts for a given product
So far I have modeled this in DynamoDB like this:
Users
----------------
HashKey: UserId
Products
-------------------
HashKey: UserId
RangeKey: ProductId
Parts
-------------------
HashKey: ProductId
RangeKey: PartId
The data is confidential and accessed through authenticated REST endpoints where an authentication token can be mapped to a UserId. Each user may be allowed to view other users' data through some group concept.
Listing all products for a given user is simple since UserId is a key in the products table:
GET /users/111/products becomes a simple Query(Table=Products, UserId=111)
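In code that lookup is a single query, for example with boto3 (table and attribute names follow the model above; region and credentials are assumed to be configured):

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
products = dynamodb.Table("Products")

# GET /users/111/products -> one query on the hash key
response = products.query(KeyConditionExpression=Key("UserId").eq("111"))
items = response["Items"]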
But consider the case of listing all parts for a given product:
GET /users/111/products/222/parts
If I simply do a Query(Table=Parts, ProductId=222) then I will get the desired data fast, but I am not protecting against other users querying for data belonging to user 111, provided they somehow know about ProductId 222 (in reality, IDs will of course be UUIDs or similar, so not easily guessable):
GET /users/119/products/222/parts
... would result in malicious user 119 retrieving data that doesn't belong to him, provided nothing is done to address this.
So here I imagine I need to do something like one of these:
First make another query to make sure product 222 in fact belongs to the given user
Duplicate the UserId in the Parts table and include it in the query condition (which basically means it will match either all rows or no rows when scanning through the set identified by ProductId): Query(Table=Parts, ProductId=222, UserId=111) (see the sketch after this list)
Use UserId as the hash key also in the Parts table and instead keep ProductId as a secondary index
Use a composite HashKey such as UserId_ProductId ("111_222") on the Parts table
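As a sketch of option 2 with boto3 (attribute names follow the model above; the duplicated UserId attribute is assumed to be written on every Parts item):

import boto3
from boto3.dynamodb.conditions import Key, Attr

dynamodb = boto3.resource("dynamodb")
parts = dynamodb.Table("Parts")

def list_parts(user_id, product_id):
    # The key condition narrows to the product; the filter drops items whose UserId
    # does not match, so a caller with the wrong UserId simply gets an empty list.
    response = parts.query(
        KeyConditionExpression=Key("ProductId").eq(product_id),
        FilterExpression=Attr("UserId").eq(user_id),
    )
    return response["Items"]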
If I need to return a 401 as opposed to just empty data, option 1 seems like the only approach. But if we imagine a deeper hierarchy of data, e.g. "users having inboxes having messages having parts having attachments" it seems this approach could eventually be expensive (listing all attachments for part P might result in a query to check that part P belongs to message M, that message M belongs to inbox I and that inbox I belongs to user U, and so on).
Does anyone have any good arguments for which approach is most favorable? Or am I doing something stupid and should be modeling my data in some other way completely?
So I have a form I have Vendors fill out when they want to ship to us. It's an Excel form that I then import into Access so I can run reports. Sometimes when they send the form back, it's in a format that forces me to manually enter the data into our database.
The form looks like this:
The middle section is just for example purposes so it's a rectangle with text in it.
So everything seemed simple enough until I got to the middle section. See, in my Excel form I have a section for multiple POs and units, so essentially each shipment can have one to many POs and units. Currently I can approach this task with the redundant method of re-entering the information per PO on the form, but I want to make this simple.
So the task at hand is that I want a form field for POs and units where I can input multiple lines of information, so that when I hit a submit button it appears in the database on separate lines with the same vendor information.
So if the form I filled out had this in the middle section:
PO     | Units
111111 | 22
222222 | 33
333333 | 44
When I hit submit I want it to attach the rest of the forms information to each PO on separate lines so it'd be like:
Vendor | City    | State | PO     | Units
Nike   | Memphis | TN    | 111111 | 22
Nike   | Memphis | TN    | 222222 | 33
Nike   | Memphis | TN    | 333333 | 44
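Just to show the shape of the transform (sketched here in Python pseudocode; the real thing will live in Access, and the names are made up for the example):

header = {"Vendor": "Nike", "City": "Memphis", "State": "TN"}
po_lines = [("111111", 22), ("222222", 33), ("333333", 44)]

# Each PO/Units line becomes its own record carrying the same header information
rows = [{**header, "PO": po, "Units": units} for po, units in po_lines]
for row in rows:
    print(row)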
So how would I go about accomplishing this task?
From your description of the problem and your example of how the data appears to ultimately be stored in Access, it looks to me like you are using Access as a spreadsheet and not as a database. This is OK, but you might want to consider normalizing the data to take advantage of the power of databases in general.
For example:
Create a Vendors table whose sole purpose is to keep details about each Vendor you work with. A very basic implementation would have an ID field to uniquely identify each vendor and a Name field for the vendor name.
If Vendors will only ever have a single location you could also store City, State, ZipCode and Email in this same Vendor table, but I suspect having a separate VendorLocation or VendorAddress table would be a better fit long term.
Create a VendorShipment table that tracks the higher level information on your mockup, such as:
ShipmentID (primary key of this table)
VendorID (foreign key back to Vendor table)
Ready Date
Carrier
Estimated Cost
FreightClass
Tracking #
Estimated Transit Time
Finally, create a VendorShipmentDetail table that tracks the information of each shipment, including:
ShipmentDetailID (primary key of this table)
ShipmentID (foreign key back to VendorShipment table)
PO
Units
Any other details that you want to or need to track
Organizing and storing the data in a normalized fashion would ultimately help simplify your data entry / data management process and potentially make for a better user experience.
For example, rather than having to enter the Vendor Name, Address information, etc. each time you could instead use a combo box control that is tied to the Vendor table. If the Vendor exists in the table you select it from the list and you already have the Address information, no need to re-enter it each time. If the Vendor did not already exist you enter it once (probably on a Vendor screen where you maintain the details for each Vendor) and draw upon the information in the future.
You would then use queries to tie the information back together for reporting purposes (de-normalize the information).
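As a rough sketch of that reporting query, here it is run from Python with pyodbc against the Access file (the table and column names simply follow the outline above, so treat them as assumptions; the file path is a placeholder):

import pyodbc

conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\path\to\Shipments.accdb;"   # placeholder path to the Access file
)

sql = """
SELECT v.[Name] AS Vendor, v.City, v.State, d.PO, d.Units
FROM (Vendor AS v
      INNER JOIN VendorShipment AS s ON s.VendorID = v.ID)
      INNER JOIN VendorShipmentDetail AS d ON d.ShipmentID = s.ShipmentID;
"""

for row in conn.cursor().execute(sql):
    print(row.Vendor, row.City, row.State, row.PO, row.Units)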
The art of database design can take a while to pick up, but a good starting point might be to check out the Northwind database that Microsoft has maintained over the years. It has some examples you could draw from immediately to get a practical understanding of how to use normalization within Access. You can find more information here: http://office.microsoft.com/en-us/templates/northwind-sales-web-database-TC101114818.aspx
I'm trying to read the name, card number, expiry date, etc. on a credit card, but SCardTransmit always returns 6d00.
I'm using pre-defined AIDs, which I have googled to be valid (correct me if I'm wrong). Here they are:
AID_LIST = {
"A0000000421010",
"A0000000422010",
"A0000000031010",
"A0000000032010",
"A0000000041010",
"A0000000042010",
"A00000006900",
"A0000001850002"
}
Thanks in advance.
I am not familiar with this API you are using, but you will have to send the following sequence of APDU commands:
SELECT PSE (for contact card), specified by EMV in Book 1, 11.3. An example is "00A404000E315041592E5359532E444446303100"
With the SFI returned, you can read the records to find out the supported AIDs. Alternatively, you can do this by "trial and error" using the pre-defined AIDs that you specified and calling SELECT AID, following the guidelines in Book 1, 12.3.3.
You may either call the "GET PROCESSING OPTIONS" command to see what records are available to read, or you can read all possible records by calling the "READ RECORD" command and scanning through the possible records. In one of those records, you will find the data you are looking for.
Usually the same record will store the cardholder name, PAN and Track 2 discretionary data (which contains the expiration date).
The tags are listed in Book 3.
Application Primary Account Number (PAN) - 5A
Cardholder name - 5F20
Track 2 Discretionary Data - 9F20
Useful info about Track 2:
http://en.wikipedia.org/wiki/Magnetic_stripe_card
A sample of the sequence above:
http://code.google.com/p/javaemvreader/wiki/ExampleOutput
EMV Specs:
http://www.emvco.com/specifications.aspx?id=223
The possible return codes, such as 61XX, 9000, etc are listed in ISO 7816. Here's a good overview: http://www.cardwerk.com/smartcards/smartcard_standard_ISO7816-4_5_basic_organizations.aspx
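As a minimal sketch of the very first step (SELECT PSE) using the pyscard library, which wraps SCardTransmit under PC/SC; the APDU bytes are the example given earlier:

from smartcard.System import readers
from smartcard.util import toBytes, toHexString

reader = readers()[0]                    # first available PC/SC reader
connection = reader.createConnection()
connection.connect()

SELECT_PSE = toBytes("00A404000E315041592E5359532E444446303100")
response, sw1, sw2 = connection.transmit(SELECT_PSE)

# 90 00 means success; 6D 00 (as in the question) means "instruction not supported"
print("SW: %02X %02X" % (sw1, sw2))
print("FCI:", toHexString(response))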
You need to look up/buy ISO 7816, the EMV specifications and your vendor's card specifications; otherwise you don't know what you are doing.