Osmium - How to keep only some tag names? - openstreetmap

I am using the osmium tool to filter data from a planet.osm.pbf file so that I can load it into a Nominatim Docker container.
Here is my command: osmium tags-filter planet-latest.osm.pbf place=city,town,village -o planet-light.osm.pbf --overwrite
This works as expected. However, it keeps a lot of data that is useless to me: for example, it keeps every translation of each place name.
Sample for Paris:
"namedetails": {
"ref": "75",
"name": "Paris",
"name:af": "Parys",
"name:am": "ፓሪስ",
"name:an": "París",
"name:ar": "باريس",
"name:ba": "Париж",
"name:be": "Парыж",
"name:nl": "Parijs",
"name:no": "Paris",
"name:oc": "París",
"name:or": "ପ୍ୟାରିସ",
"name:os": "Париж",
"name:pa": "ਪੈਰਿਸ",
"name:pl": "Paryż",
"name:ps": "پاريس"
...
}
As I have a lot of records in my DB, all those translations take up a lot of space and I do not need them.
Is there a way to only keep some tags? For example, I would like to keep only the following tags: name, name:en, name:fr.
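As far as I can tell, osmium tags-filter selects which objects to keep but does not strip tags from the objects it keeps; dropping the unwanted name:* tags needs a post-processing step (for example with the pyosmium bindings, or an osmosis tag transform). The core of that step is just a whitelist over each object's tag map. Here is a minimal sketch of that logic in JavaScript; the kept tag names come from the question, everything else is illustrative:

```javascript
// Keep only a whitelist of tag keys on an OSM object's tag map.
const KEEP = new Set(['name', 'name:en', 'name:fr', 'place']);

function filterTags(tags) {
  return Object.fromEntries(
    Object.entries(tags).filter(([key]) => KEEP.has(key))
  );
}

// Example: a trimmed-down tag map for Paris.
const tags = {
  place: 'city',
  name: 'Paris',
  'name:af': 'Parys',
  'name:fr': 'Paris',
  'name:pl': 'Paryż',
};
console.log(filterTags(tags));
// { place: 'city', name: 'Paris', 'name:fr': 'Paris' }
```

Applied per node/way/relation while copying the file, this drops every translation except the whitelisted ones.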

Firebase Error in Retrieving - Syntax Error

I stumbled on something odd and was wondering if I could get some input. I have a dynamic child naming system and want to retrieve data. When I pass the dynamically built child path it does not retrieve anything, but when I hardcode the path it does. I have debugged via print and other forms of observation, and the path looks correct. Any clue?
Assume test = "test" (to make the examples easier to read). Could it be the way the statements are written? The working one uses queryOrdered, whereas the other uses observe.
Working Type:
let refPosts = Database.database().reference().child("postsof"+"\(test)").queryOrdered(byChild: "username").queryEqual(toValue: "\(result)")
Does Not Work:
Database.database().reference().child("postsof"+"\(test)").observe(DataEventType.value)
Works Manually:
Database.database().reference().child("postsoftest").observe(DataEventType.value)
Edit:
Print: All print statements show "postsoftest". This confirms that in the second example "postsof"+"\(test)" evaluates to postsoftest in the debugger, yet it does not retrieve the data like the other two examples.
JSON:
"postsoftest": {
  "autoID": {
    "name": "jack",
    "link": "..."
  }
},
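The question is in Swift, but a common cause of "prints identically yet does not match" is an invisible character (trailing whitespace, a zero-width space) in the interpolated value. A quick way to check is to compare the two strings code point by code point; the check is language-agnostic, so it is sketched here in JavaScript (in Swift the equivalent would be inspecting test.unicodeScalars):

```javascript
// Compare two strings code point by code point to expose hidden characters.
function diffCodePoints(a, b) {
  const A = [...a], B = [...b];
  const diffs = [];
  for (let i = 0; i < Math.max(A.length, B.length); i++) {
    if (A[i] !== B[i]) {
      diffs.push({ index: i, left: A[i], right: B[i] });
    }
  }
  return diffs;
}

// "postsoftest" with a zero-width space looks identical when printed.
const manual = 'postsoftest';
const dynamic = 'postsof' + 'test\u200b';
console.log(diffCodePoints(manual, dynamic));
// one diff at index 11: the hidden U+200B in the dynamic string
```

If the dynamic path contains such a character, Firebase is being asked for a different child than the hardcoded one, even though both print the same.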

How to get the all-contributors GitHub app to render the table of contributors and badge?

I'm using https://github.com/all-contributors/all-contributors, and I've gone through every detail I can find in their documentation at https://allcontributors.org/ as well. I've been trying different things for the past 2 hours, but I can't get this app to render the table of contributors. Their documentation is incredibly poor.
I have:
{
  "files": [
    "readme.md",
    "docs/authors.md",
    "docs/contributors.md"
  ],
  "imageSize": 100,
  "contributorsPerLine": 7,
  "contributorsSortAlphabetically": false,
  "badgeTemplate": "[![All Contributors](https://img.shields.io/badge/all_contributors-<%= contributors.length %>-pink.svg)](#contributors)",
  "contributorTemplate": "<img src=\"<%= contributor.avatar_url %>\" width=\"<%= options.imageSize %>px;\" alt=\"\"/><br /><sub><b><%= contributor.name %></b></sub>",
  "types": {
    "contributor": {
      "symbol": "❤️",
      "description": "Contributor ❤️",
      "link": "[<%= symbol %>](<%= url %> \"<%= description %>\"),"
    }
  },
  "skipCi": true,
  "contributors": [],
  "projectName": ".github",
  "projectOwner": "owner",
  "repoType": "github",
  "repoHost": "https://github.com"
}
I then comment @all-contributors please add @somename for code, and it does correctly add the entry to the .all-contributorsrc file; however, it doesn't render the table in readme.md.
I've also tried to hardcode the list in readme using:
<!-- ALL-CONTRIBUTORS-LIST:START -->
<!-- ALL-CONTRIBUTORS-LIST:END -->
But no luck, nothing happens.
I also can not get the badge working. I can hardcode the badge and display "a" badge, but it never uses the above template badge with dynamic contributor length. So it's also not injecting the badge or using that template badge at all.
How can I get this bot to correctly show the badge and show the contributor list and generate the table in readme.md?
Note: I'm not interested in using node or running some generate command manually on my machine; that defeats the point of using the app at all, since then I could just as well do it myself. According to their documentation it should generate the table automatically on the first contribution, but it does not.
Also, when I view https://raw.githubusercontent.com/all-contributors/all-contributors/master/README.md in raw view, I can see the generated contributor list markup, which tells me the bot has to be generating that table somehow, but it doesn't seem to work for me.
You can try and emulate other repositories using the same bot, like:
AtlasFoundation/AvatarCreator with this commit.
mrz1836/go-nownodes and its own .all-contributorsrc file
As an alternative, estruyf/vscode-front-matter includes its own contributor list, using this commit, which calls contrib.rocks.
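One detail worth double-checking: the bot only injects content between its HTML comment markers, and the badge has its own marker pair, separate from the list. If I remember the conventions correctly, the README needs something like the fragment below; the exact marker text may matter, so compare against a repository where the bot works, such as the ones mentioned above:

```markdown
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
<!-- ALL-CONTRIBUTORS-BADGE:END -->

<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- ALL-CONTRIBUTORS-LIST:END -->
```

If only the bare LIST markers are present, the badge template has nowhere to be injected.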

Is it recommended to keep scenario response data in external files read from code, instead of in feature files?

Please help me understand the best usage of BDD and feature files.
I have REST calls and need to validate the response data. Can I specify the expected response data in the feature file, as shown below?
Then response includes the following in any order:
| data[].username | 1111 |
| data[].phoneNumbers[].phoneNumber | 122-222-2222 |
| data[].retailLocationRoles[*].securityRoleId | 10 |
Otherwise, should I keep the expected response data (the table above) in external files and read it from code? Is that a best practice?
If the expected response data changes in the future, is it a good idea to change it inside the feature files? Or do we need to follow a TDD process?
Someone suggested that I keep the data in external files rather than feature files and read it from code, saying it's not a good idea to change a feature file when response data changes.
Thanks for reading.
It is totally up to you. If you read from external files, you can re-use them in multiple features; if you don't need re-use, keep the data in-line. And please don't worry about "BDD": you can ignore it.
One more advantage of keeping JSON files external is that you can open them in a JSON editor.
Don't over-think your tests, just get started and you can easily evolve later once you understand the concepts.
Since you seem to be looking only for specific items, a normal match should be sufficient:
* def response = { data: [ { username: '1111', phoneNumbers: [ '122-222-2222' ], retailLocationRoles: [ { securityRoleId: 10 } ] } ] }
* def phone = '122-222-2222'
* def role = { securityRoleId: 10 }
* def user = { username: '1111', phoneNumbers: '#(^phone)', retailLocationRoles: '#(^role)' }
* match response.data contains user
If you want you can re-use the user object above, by a call to a JS file or a feature file.

How can we make Cypress scripts easily maintainable, like POM in other tools such as Selenium?

This is just a general clarification about building framework using cypress.io.
In cypress can we write a test framework like page object model in selenium?
This model makes our tests easier to maintain.
For example, if the ID or class of an element used across multiple tests/files changes with a new version of the application, in Cypress it is hard to go through multiple test files and change the ID, right?
Can we follow the same page object model concept like declaring all elements as variables in each page and use the variable names in tests/functions?
Also can we reuse these variables across different test .js files ?
If yes - can you please give a sample
Thanks
I have seen only a few people use the POM concept when building an automation framework with Cypress. Whether it is advisable to follow the POM model depends on your tools and architecture; according to the Cypress team it is not recommended, though that may be a debatable topic. Read their post: https://www.cypress.io/blog/2019/01/03/stop-using-page-objects-and-start-using-app-actions/#
We can declare the variables in the cypress.env.json file or the cypress.json file like below:
{
"weight": "85",
"height": "180",
"age": "35"
}
Then, if you want to use them in a test spec, create a new variable and read the value like below in the test spec.
const t_weight = Cypress.env('weight');
const t_height = Cypress.env('height');
Now you can use the variables in the respective textbox inputs of the pages as below:
cy.get('#someweighttextfieldID').type(t_weight);
cy.get('#someheighttextfieldID').type(t_height);
or receive it directly;
cy.get('#someweighttextfieldID').type(Cypress.env('weight'));
example:
/* declare variables in the 'test-spec.js' file */
const t_weight = Cypress.env('weight');
const t_height = Cypress.env('height');

// Cypress test - assume the test below performs some action and types the variables into the text boxes
describe('Cypress test to receive variable', function () {
  it('Cypress test to receive variable', function () {
    cy.visit('/');
    cy.get('#someweighttextfieldID').type(t_weight);
    cy.get('#someheighttextfieldID').type(t_height);
    // or read the variable straight away
    cy.get('#someweighttextfieldID').type(Cypress.env('weight'));
  });
});
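To share selectors themselves across spec files, rather than just env values, a plain module works too. This is only a sketch (file path and selector names are made up, reusing the IDs from the example above), not an official Cypress pattern:

```javascript
// cypress/support/selectors.js - one place for selectors used by many specs,
// so an ID change is fixed in a single file.
const profilePage = {
  weightInput: '#someweighttextfieldID',
  heightInput: '#someheighttextfieldID',
};

module.exports = { profilePage };

// In a spec file you would then write:
//   const { profilePage } = require('../support/selectors');
//   cy.get(profilePage.weightInput).type(Cypress.env('weight'));
```

When the application changes an ID, only selectors.js needs updating, which is the maintainability benefit POM provides, without committing to full page-object classes.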

Extend multiple sources / indexes

I have many web pages that are clones of each other. They have the exact same database structure, just different data in different databases (each clone is for a different country, so everything is separated).
I would like to clean up my sphinx config file so that I don't duplicate the same queries for every site.
I'd like to define a main source (with DB auth info) for every clone, a common source for every table I'd like to search, and then sources & indexes for every table and every clone.
But I'm not sure how exactly I should go about doing that.
I was thinking something along these lines:
index common_index
{
# charset_type, stopwords, etc
}
source common_clone1
{
# sql_host, sql_user, ...
}
source common_clone2
{
# sql_host, sql_user, ...
}
# ...
source table1
{
# sql_query, sql_attr_*, ...
}
source clone1_table1 : ???
{
# ???
}
# ...
index clone1_table1 : common_index
{
source = clone1_table1
#path, ...
}
# ...
So you can see where I'm confused :)
I thought I could do something like this:
source clone1_table1 : table1, common_clone1 {}
but it's not working obviously.
Basically what I'm asking is; is there any way to extend two sources/indexes?
If this isn't possible I'll be "forced" to write a script that will generate my sphinx config file to ease maintenance.
Apparently this isn't possible (I don't know if it's in the pipeline for the future). I'll have to resort to generating the config file with some sort of script.
I've created such a script, you can find it on GitHub: sphinx generate config php
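The generation approach is straightforward: loop over clones and tables and emit one source/index pair per combination. A minimal sketch of the idea, in JavaScript rather than the PHP the linked script uses; clone names, credentials, and the query are placeholders:

```javascript
// Emit a sphinx config with one source/index pair per clone and table.
const clones = {
  clone1: { host: 'db1.example.com', db: 'clone1' },
  clone2: { host: 'db2.example.com', db: 'clone2' },
};
const tables = {
  table1: 'SELECT id, title, body FROM table1',
};

let config = '';
for (const [clone, auth] of Object.entries(clones)) {
  for (const [table, query] of Object.entries(tables)) {
    config += `
source ${clone}_${table}
{
    type        = mysql
    sql_host    = ${auth.host}
    sql_db      = ${auth.db}
    sql_query   = ${query}
}

index ${clone}_${table} : common_index
{
    source      = ${clone}_${table}
    path        = /var/sphinx/${clone}_${table}
}
`;
  }
}
console.log(config);
```

The shared query lives in one place (the tables map), so adding a clone or table is a one-line change instead of another copy-pasted config block.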