I am using RESTHeart to access a Mongo database. RESTHeart has an API that is supposed to create a database, e.g.:
curl -X PUT http://localhost:8080/db1
Well, I was using a Chrome browser-based REST client that happened to do the equivalent of the following curl call, but I forgot to nuke the data portion. It contained the JSON {"e":"f"} as data.
curl -X PUT -H 'Content-Type: application/json' --data-raw '{"e":"f"}' http://localhost:8080/db2
When I then did a curl GET, it returned a value with the key/value pair "e":"f" stuffed in there - which is not what I want.
$ curl http://localhost:8080/db2
... { "_id" : "db2" , "e" : "f" , "_etag" : { "$oid" : "570f90601d956327e8df28c4"} , "_size" : 0 , "_total_pages" : 0 , "_returned" : 0}
Now, using the Mongo shell, I've tried to find this key/value pair with just about every Mongo shell command, but I can't find it, nor can I remove it. In fact, I can create a rather large Mongo database, then do that curl PUT, and I'm screwed: it adds the pair to my nice clean database.
Does anyone know how I can remove that strange key/value pair, either using the Mongo shell or the RESTHeart API - short of nuking the database and recreating it from scratch?! Thanks.
To remove the db property, just update the db:
With PATCH:
PATCH /db {"$unset": {"e": null}}
Or with PUT:
PUT /db {}
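As concrete curl calls (a sketch, assuming the db2 database from the question):
curl -X PATCH -H 'Content-Type: application/json' --data-raw '{"$unset": {"e": null}}' http://localhost:8080/db2
curl -X PUT -H 'Content-Type: application/json' --data-raw '{}' http://localhost:8080/db2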
For more info, look at the documentation's reference sheet and representation format.
I'm trying to insert a record into MongoDB using mongosh and Ubuntu bash. I've retrieved a record with mongosh. I have to edit 3 fields and then insert the result. I thought I'd do the editing with jq, but I can't get it to work.
{ "_id": {"fileName": "xxxxxx","namespace": "yyyyyy" },
"metainfo": {"file-type": "csv","environment": "int",
"creation-date": 1672306975130000000,"file-name":"xxxxxxx" }
}
I have to edit creation-date (the date is in nanos), the environment, and change part of the fileName (take a substring). I retrieved the document with --eval "EJSON.stringify(....)".
The jq command I've tried is:
newDocument=$(echo "$fileData" | jq '.metainfo.environment |= "pro"')
and it gives me this error:
parse error: Invalid numeric literal at line 1, column 8
I've validated the JSON and it's well formed.
After making the changes I have to do the insert. I thought of doing it with:
--eval "......insertOne(EJSON.stringlify($newDocument))"
Is this correct? What would be the best way to do all this?
Thanks to all.
The error was happening because I was making the request without the --quiet parameter. With --quiet, the mongo shell outputs plain JSON without problems.
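For reference, a minimal sketch of the whole round trip, assuming a hypothetical database mydb and collection files:
# --quiet suppresses the startup banner that made jq fail with "Invalid numeric literal"
fileData=$(mongosh --quiet "mongodb://localhost/mydb" \
  --eval 'EJSON.stringify(db.files.findOne())')
# Edit a field with jq
newDocument=$(echo "$fileData" | jq '.metainfo.environment |= "pro"')
# Pass the edited JSON via the environment to avoid shell-quoting pitfalls;
# EJSON.parse turns it back into a document before the insert
NEW_DOC="$newDocument" mongosh --quiet "mongodb://localhost/mydb" \
  --eval 'db.files.insertOne(EJSON.parse(process.env.NEW_DOC))'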
While trying to establish a list of incoming GitHub commits, I've stumbled across the GitHub API rate limit of 60 calls per hour. As explained in this answer, one can get the list of branches with an API call using:
https://api.github.com/repos/{username}/{repo-name}/branches
However, that triggers the rate limit for the average GitHub organisation/user. So I thought I'd try a different approach, using the RSS/Atom format. However, as that same answer explains, the Atom format/RSS feed seems to depend on the user having a list of all branches in a repository. This question asks for an overview of all commits in a repository, yet the answer it received only covers commits on the repository's default branch. And this question receives a working answer that triggers the rate limit, as it relies on at least 1 API call per repository.
Hence, I would like to ask: How could one get a list of all branches of a GitHub user, using at most 1 GitHub API call?
Note, using Atom views would be perfectly fine; however, I have not found an Atom view like https://github.com/:owner/:repo/commits.atom or https://github.com/:owner/:repo/branches.atom that displays all branches in a repository. I would strongly prefer a solution that does not rely on a third party like https://rsshub.app/github/repos/yanglr, as I imagine they too will at some point start rate-limiting.
My current approach is to scrape the source code of https://github.com/:user/:repo/branches using bash. However, I imagine there might exist a more efficient solution to this.
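For illustration, a rough sketch of that scraping approach (hypothetical and fragile: it assumes branch links in the page HTML look like /:user/:repo/tree/:branch, which GitHub may change at any time):
user=someuser; repo=somerepo
curl -s "https://github.com/$user/$repo/branches/all" \
  | grep -oE "href=\"/$user/$repo/tree/[^\"]+\"" \
  | sed 's|.*/tree/||; s|"$||' \
  | sort -u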
MWE
Thanks to the comments, I was able to find a bash MWE to perform a GraphQL query from the terminal. It is given in this answer, where bearer is not a variable (it is the means of identification) and the ...... should be your personal GitHub access token. I am currently looking into how to get the repositories beyond the 1st hundred. Then I'll look at how to get the branches of those repositories.
Attempt I
The following query yields a JSON with the repositories and the first 4 branches in each repository of a user!
Filename: examplequery.gql
query {
  repositoryOwner(login: "somegithubuser") {
    repositories(first: 40) {
      edges {
        node {
          nameWithOwner
          refs(
            refPrefix: "refs/heads/"
            orderBy: { direction: DESC, field: TAG_COMMIT_DATE }
            first: 4
          ) {
            edges {
              node {
                ... on Ref {
                  name
                }
              }
            }
          }
        }
      }
    }
  }
}
Next, a bash script is made that runs the query:
#!/usr/bin/env bash
# Runs a GraphQL query on GitHub. Execute with:
# ./run_graphql_query.sh examplequery1.gql
GITHUB_PERSONAL_ACCESS_TOKEN_GLOBAL="your_github_personal_access_token"
if [ "$#" -ne 1 ]; then
    echo "usage of this script is incorrect."
    exit 1
fi
if [ ! -f "$1" ]; then
    echo "usage of this script is incorrect."
    exit 1
fi
# Form the query JSON: strip newlines from the .gql file and wrap it as {"query": ...}
QUERY=$(jq -n \
    --arg q "$(tr -d '\n' < "$1")" \
    '{ query: $q }')
curl -s -X POST \
    -H "Content-Type: application/json" \
    -H "Authorization: bearer $GITHUB_PERSONAL_ACCESS_TOKEN_GLOBAL" \
    --data "$QUERY" \
    https://api.github.com/graphql
It can be run with:
./run_graphql_query.sh examplequery1.gql
There are two more issues to resolve before I can answer the question: how to iterate over all repositories instead of only the first 100, and how to parse the JSON into a list of branches per repository.
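For the parsing issue, a sketch with jq (the field names follow the query above):
./run_graphql_query.sh examplequery1.gql | jq -r '
  .data.repositoryOwner.repositories.edges[]
  | .node.nameWithOwner as $repo
  | .node.refs.edges[]
  | "\($repo) \(.node.name)"'
For the pagination issue, GitHub's GraphQL connections support cursors: adding pageInfo { hasNextPage endCursor } to the repositories selection and re-running the query with repositories(first: 100, after: "<endCursor>") should walk through all repositories until hasNextPage is false.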
I have around 6 million rows in my MongoDB collection, and importing them into Meilisearch using php artisan scout:import 'model' takes forever to finish.
Importing data with the chunk option php artisan scout:import 'model' -c 10000 gives me the following error.
MongoDB\Exception\InvalidArgumentException
Expected "limit" option to have type "integer" but found "string"
at vendor/mongodb/mongodb/src/Exception/InvalidArgumentException.php:59
55▕
56▕ $expectedType = $typeString;
57▕ }
58▕
➜ 59▕ return new static(sprintf('Expected %s to have type "%s" but found "%s"', $name, $expectedType, get_debug_type($value)));
60▕ }
61▕ }
62▕
+27 vendor frames
28 artisan:37
Illuminate\Foundation\Console\Kernel::handle()
I also tried exporting the collection as JSON from MongoDB and manually importing it into Meilisearch using curl -X POST 'http://127.0.0.1:7700/indexs/posts/documents' / --data #/data/posts.json, which gives the following error.
{"message":"Invalid JSON: invalid type: map, expected a sequence at line 1 column 0","errorCode":"bad_request","errorType":"invalid_request_error","errorLink":"https://docs.meilisearch.com/errors#bad_request"}curl: (3) URL using bad/illegal format or missing URL
posts.json is the JSON file exported from the MongoDB collection using the mongoexport command.
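For comparison, this is roughly what the manual import was aiming for (a sketch: the endpoint is /indexes/, and Meilisearch expects a JSON array, while mongoexport emits one object per line, so jq -s wraps the lines into an array first):
jq -s '.' /data/posts.json > /data/posts_array.json
curl -X POST 'http://127.0.0.1:7700/indexes/posts/documents' \
  -H 'Content-Type: application/json' \
  --data-binary @/data/posts_array.json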
How can I import data fast into meilisearch?
Versions
"laravel/scout":"^9.1"
"laravel/framework": "^8.12",
"meilisearch/meilisearch-php": "^0.18.2",
mongodb version : "3.6"
OS
Ubuntu 20.04
After a year of going through a boring CLI to interact with my data via the mongo client, I found the best tool, the one I wish I had had from the start: MongoDB Compass.
After going through all the features, I noticed the similarity between this tool and phpMyAdmin. My questions are:
How can I view all the queries I have executed, just like in the phpMyAdmin console?
Is it possible to export all the queries and/or import queries into Compass, just like phpMyAdmin?
Thanks.
Compass is not phpMyAdmin. phpMyAdmin allows you to enter and run queries, whereas Compass is more like a wrapper around find; although it won't give you the query to run, you can easily build the query yourself.
Take the following example. From Compass's query bar options I could build this query like so:
db.users.find({ username: 'jim' }, { password: 0 })
.sort({ created_at: -1 })
.skip(2)
.limit(1)
To export the result of this query you can use mongoexport. Sadly you can't use the above query for this but you will have to add a separate argument for each section. You should also note that in the above I exclude password, but with mongoexport you are unable to exclude fields - you can only specify which fields to include.
mongoexport -d test -c users -q '{ "username": "jim" }' --fields='username,created_at' --sort '{ "created_at": -1 }' --skip 2 --limit 1 --out exported_users.json
I want to search for a filename pattern across the entire JFrog ARM, without knowing the explicit repository name, in the JFrog CLI.
jfrog rt s "reponame/*pattern*"
is giving the results as expected in a specific repo.
But I have repo1, repo2, repo3, and so on.
How do I search using a wildcard for the repo name? The following is not working:
jfrog rt s "*/*pattern*"
Basically I want the JFrog CLI equivalent of the curl GET request search:
"https://server/artifactory/api/search/artifact?name=*pattern*"
This is not for the CLI client, but an alternative way to get the desired feature. I spent some time looking at the API here:
https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API
I recommend scrolling down that page slowly and reading it in its entirety, as there are a lot of possible commands and the syntax is excellent. I executed a few searches and they searched all local repositories, so there is no need to recursively search them one by one. Command syntax:
export url="http://url/to/artifactory"
curl --noproxy '*' -X GET "$url/api/search/artifact?name=log4j*"
Read the link above for more granular search options/syntax.
How I set it up:
alias artpost='curl -X POST "http://url/artifactory/api/search/aql" -T - -u admin:password'
Some example usage:
echo 'items.find({"name": {"$match" : "log4j*"}})' | artpost
echo 'items.find({"$and" : [{"created" : {"$gt" : "2017-06-12"}},{"name": {"$nmatch" : "*surefire*"}}]})' | artpost