AWS ElasticSearch shrink API node name

AWS Docs for AWS ElasticSearch have a section about the shrink API:
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-supported-es-operations.html#es_version_api_notes-shrink
The sample shows how to shrink:
PUT https://domain.region.es.amazonaws.com/source-index/_settings
{
  "settings": {
    "index.routing.allocation.require._name": "name-of-the-node-to-shrink-to",
    "index.blocks.read_only": true
  }
}
PUT https://domain.region.es.amazonaws.com/source-index/_settings
{
  "settings": {
    "index.routing.allocation.require._name": null,
    "index.blocks.read_only": false
  }
}
PUT https://domain.region.es.amazonaws.com/shrunken-index/_settings
{
  "settings": {
    "index.routing.allocation.require._name": null,
    "index.blocks.read_only": false
  }
}
Where do I get the name-of-the-node-to-shrink-to, source-index and shrunken-index?

For the shrink operation, all primary shards for the index must reside on the same node.
https://www.elastic.co/guide/en/elasticsearch/reference/master/indices-shrink-index.html
The "name-of-the-node-to-shrink-to" parameter relocates the index's shards to that particular node. You can get the node names for your cluster using the cat nodes API:
GET /_cat/nodes?v&h=name
Next, your source-index is the index that you want to perform the shrink operation on. The shrink operation creates a new target index, which will be your shrunken-index.
You will make the following API call to perform the shrink operation:
POST /my_source_index/_shrink/my_target_index
{
  "settings": {
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}
The other two calls that you see in the AWS documentation are for clearing the allocation requirement for the source-index and the shrunken-index.
You can also follow Elastic's guide on the same.
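Putting the steps above together, here is a minimal Python sketch. It only builds the ordered request sequence rather than issuing HTTP calls; the node name and index names are placeholders you would replace with values from your own cluster (GET /_cat/nodes?v&h=name for the node name):

```python
# Sketch of the full shrink sequence. "node-1", "source-index" and
# "shrunken-index" are placeholders, not real cluster values.

def shrink_steps(node_name, source_index, target_index):
    """Return the ordered (method, path, body) calls for a shrink operation."""
    relocate_and_block = {
        "settings": {
            # Force all primary shards onto a single node, as shrink requires.
            "index.routing.allocation.require._name": node_name,
            "index.blocks.read_only": True,
        }
    }
    clear_settings = {
        "settings": {
            "index.routing.allocation.require._name": None,
            "index.blocks.read_only": False,
        }
    }
    return [
        # 1. Relocate shards and make the source index read-only.
        ("PUT", f"/{source_index}/_settings", relocate_and_block),
        # 2. Shrink into the new target index, clearing the settings it inherits.
        ("POST", f"/{source_index}/_shrink/{target_index}", {
            "settings": {
                "index.routing.allocation.require._name": None,
                "index.blocks.write": None,
            }
        }),
        # 3-4. Clear the allocation requirement and read-only block on both indices.
        ("PUT", f"/{source_index}/_settings", clear_settings),
        ("PUT", f"/{target_index}/_settings", clear_settings),
    ]

for method, path, _body in shrink_steps("node-1", "source-index", "shrunken-index"):
    print(method, path)
```

You would then issue each call against your domain endpoint with whatever HTTP client you use.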


How to query all languages from GitHub's GraphQL API

I am trying to query GitHub for information about repositories using their v4 GraphQL API. One of the things I want to query is the breakdown of all the languages used in a repo, or if possible, the breakdown of the languages across all of a user's repos. I have tried the following snippet, but it returns null, whereas primaryLanguage returns the primary language:
languages {
  edges {
    node {
      name
    }
  }
}
The only thing I can find relating to languages is the primary language, but I would like to show stats for a user and all the languages they use, either in a single repo or across all of their repos.
You are missing the slicing argument; here you can put first: 100 to get the first 100 languages for the repository:
{
  user(login: "torvalds") {
    repositories(first: 100) {
      nodes {
        primaryLanguage {
          name
        }
        languages(first: 100) {
          nodes {
            name
          }
        }
      }
    }
  }
}
If you want to have stats per language (e.g. if you want to know which is the second, third language, etc.), I'm afraid this is not currently possible with the GraphQL API, but you can use the List Languages REST API instead, for instance https://api.github.com/repos/torvalds/linux/languages
I wanted to point out something else that may help.
You can get more details about a language (i.e. primary, secondary, etc.) by looking at the language size: compare the totalSize for the whole repo to the size of each language in it.
The following query (an example for pytorch) will get the data you need. Paste it into GitHub's GraphQL Explorer to check it out.
{
  repository(name: "pytorch", owner: "pytorch") {
    languages(first: 100) {
      totalSize
      edges {
        size
        node {
          name
          id
        }
      }
    }
  }
}
You will get an output of the form:
{
  "data": {
    "repository": {
      "languages": {
        "totalSize": 78666590,
        "edges": [
          {
            "size": 826272,
            "node": {
              "name": "CMake",
              "id": "MDg6TGFuZ3VhZ2U0NDA="
            }
          },
          {
            "size": 29256797,
            "node": {
              "name": "Python",
              "id": "MDg6TGFuZ3VhZ2UxNDU="
            }
          }, ...
To get the percentage for each language, just compute size / totalSize * 100.
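Applied to the sample numbers above, that calculation looks like this (a quick Python sketch; the two sizes are taken straight from the sample response):

```python
# Per-language percentages from the sizes in the sample response above.
languages = {"CMake": 826272, "Python": 29256797}
total_size = 78666590  # "totalSize" from the response

percentages = {
    name: round(size / total_size * 100, 2) for name, size in languages.items()
}
print(percentages)  # CMake is roughly 1.05%, Python roughly 37.19%
```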

Why am I able to bypass pagination when I call the same field twice (with different queries) in GitHub's GraphQL API

I noticed something I don't understand while trying to get the number of open issues per repository for a user.
When I use the following query I am asked to perform pagination (as expected) -
query {
  user(login: "armsp") {
    repositories {
      nodes {
        name
        issues(states: OPEN) {
          totalCount
        }
      }
    }
  }
}
The error message after running the above -
{
  "data": {
    "user": null
  },
  "errors": [
    {
      "type": "MISSING_PAGINATION_BOUNDARIES",
      "path": [
        "user",
        "repositories"
      ],
      "locations": [
        {
          "line": 54,
          "column": 5
        }
      ],
      "message": "You must provide a `first` or `last` value to properly paginate the `repositories` connection."
    }
  ]
}
However when I do the following I actually get all the results which doesn't make any sense to me -
query {
  user(login: "armsp") {
    repositories {
      totalCount
    }
    repositories {
      nodes {
        name
        issues(states: OPEN) {
          totalCount
        }
      }
    }
  }
}
Shouldn't I be asked for pagination in the second query too ?
TLDR; This appears to be a bug. There's no way to bypass the limit applied when fetching a list of resources.
Limiting responses like this is a common feature of public APIs -- if the response could include thousands or millions of results, it'll tie up a lot of server resources to fulfill it all at once. Allowing users to make those sort of queries is both costly and a potential security risk.
Github's intent appears to be to always limit the amount of results when fetching a list of resources. This isn't well documented on the GraphQL side, but matches the behavior of their REST API:
Requests that return multiple items will be paginated to 30 items by default. You can specify further pages with the ?page parameter. For some resources, you can also set a custom page size up to 100 with the ?per_page parameter.
For connections, it looks like the check for the first or last parameter is only run when the nodes field is present in the selection set. This makes sense, since this is ultimately the field we want to limit -- requesting other fields like totalCount or totalDiskUsage, even without a limit argument, is harmless with regard to the above concerns.
Things get funky when you consider how GraphQL handles selection sets with selections that have the same name. Without getting into the nitty gritty details, GraphQL will let you request the same field multiple times. If the field in question has a selection set, it will effectively merge the selection sets into a single one. So
query {
  user(login: "armsp") {
    repositories {
      totalCount
    }
    repositories {
      totalDiskUsage
    }
  }
}
becomes and is equivalent to
query {
  user(login: "armsp") {
    repositories {
      totalCount
      totalDiskUsage
    }
  }
}
Side note: The above does not hold true if you explicitly give one of the fields an alias since then the two fields have different response names.
All that to say, technically this query:
query {
  user(login: "armsp") {
    repositories {
      totalCount
    }
    repositories {
      nodes {
        name
        issues(states: OPEN) {
          totalCount
        }
      }
    }
  }
}
should also blow up with the same MISSING_PAGINATION_BOUNDARIES error. The fact that it doesn't means the selection set merging is somehow borking the check that's in place. This is clearly a bug. However, even while this appears to "work", it still doesn't get around whatever limits Github applies at the storage layer -- you will always get at most 100 results, even when exploiting the above bug.
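For completeness, the supported way to get everything is to pass first plus an after cursor and loop until hasNextPage is false. Here's a Python sketch of that loop; fetch_page is a hypothetical stand-in for whatever client you use to POST the query with (first, after) variables, and the stubbed pages below stand in for real API responses:

```python
# Sketch of cursor pagination over a GraphQL connection. fetch_page is a
# hypothetical callable that runs the query and returns the connection dict:
# {"nodes": [...], "pageInfo": {"hasNextPage": ..., "endCursor": ...}}

def all_repositories(fetch_page, page_size=100):
    """Collect every node by following endCursor until hasNextPage is false."""
    nodes, cursor = [], None
    while True:
        page = fetch_page(first=page_size, after=cursor)
        nodes.extend(page["nodes"])
        info = page["pageInfo"]
        if not info["hasNextPage"]:
            return nodes
        cursor = info["endCursor"]

# Stubbed pages standing in for real API responses:
pages = [
    {"nodes": [{"name": "repo-a"}, {"name": "repo-b"}],
     "pageInfo": {"hasNextPage": True, "endCursor": "abc"}},
    {"nodes": [{"name": "repo-c"}],
     "pageInfo": {"hasNextPage": False, "endCursor": "def"}},
]
fetched = iter(pages)
print(all_repositories(lambda first, after: next(fetched)))
```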

How can I get branch count on a repository via GitHub API?

I'm working on a UI which lists all repositories of a given user or organization. It uses a tree format, where the first level is the repositories, and the second level of the hierarchy (the child nodes) is the branches of each repository, shown when it is expanded.
I'm using a mechanism that deliberately doesn't require me to pull a list of all branches of a given repo, because the API has rate limits on API calls. Instead, all I have to do is instruct it how many child nodes it contains, without actually assigning values to them (until the moment the user expands it). I was almost sure that fetching a list of repos includes branch count in the result, but to my disappointment, I don't see it. I can only see count of forks, stargazers, watchers, issues, etc. Everything except branch count.
The intention of the UI is that it will know in advance the number of branches to populate the child nodes, but not actually fetch them until after user has expanded the parent node - thus immediately showing empty placeholders for each branch, followed by asynchronous loading of the actual branches to populate. Again, since I need to avoid too many API calls. As user scrolls, it will use pagination to fetch only the page(s) it needs to show to the user, and keep it cached for later display.
Specifically, I'm using the Virtual TreeView for Delphi:
procedure TfrmMain.LstInitChildren(Sender: TBaseVirtualTree; Node: PVirtualNode;
  var ChildCount: Cardinal);
var
  L: Integer;
  R: TGitHubRepo;
begin
  L := Lst.GetNodeLevel(Node);
  case L of
    0: begin
      //TODO: Return number of branches...
      R := TGitHubRepo(Lst.GetNodeData(Node));
      ChildCount := R.I['branch_count']; //TODO: There is no such thing!!!
    end;
    1: ChildCount := 0; //Branches have no further child nodes
  end;
end;
Is there something I'm missing that allows me to get repo branch count without having to fetch a complete list of all of them up-front?
You can use the new GraphQL API instead. This allows you to tailor your queries and results to just what you need. Rather than grabbing the count and then later filling in the branches, you can do both in one query.
Try out the Query Explorer.
query {
  repository(owner: "octocat", name: "Hello-World") {
    refs(first: 100, refPrefix: "refs/heads/") {
      totalCount
      nodes {
        name
      }
    }
    pullRequests(states: [OPEN]) {
      totalCount
    }
  }
}
The response:
{
  "data": {
    "repository": {
      "refs": {
        "totalCount": 3,
        "nodes": [
          {
            "name": "master"
          },
          {
            "name": "octocat-patch-1"
          },
          {
            "name": "test"
          }
        ]
      },
      "pullRequests": {
        "totalCount": 192
      }
    }
  }
}
Pagination is done with cursors. First you get the first page -- up to 100 at a time, though we're using just 2 here for brevity. Each element in the response will contain a unique cursor.
{
  repository(owner: "octocat", name: "Hello-World") {
    pullRequests(first: 2, states: [OPEN]) {
      edges {
        node {
          title
        }
        cursor
      }
    }
  }
}
{
  "data": {
    "repository": {
      "pullRequests": {
        "edges": [
          {
            "node": {
              "title": "Update README"
            },
            "cursor": "Y3Vyc29yOnYyOpHOABRYHg=="
          },
          {
            "node": {
              "title": "Just a pull request test"
            },
            "cursor": "Y3Vyc29yOnYyOpHOABR2bQ=="
          }
        ]
      }
    }
  }
}
You can then ask for more elements after the cursor. This will get the next 2 elements.
{
  repository(owner: "octocat", name: "Hello-World") {
    pullRequests(first: 2, after: "Y3Vyc29yOnYyOpHOABR2bQ==", states: [OPEN]) {
      edges {
        node {
          title
        }
        cursor
      }
    }
  }
}
Queries can be written like functions and passed arguments. The arguments are sent in a separate bit of JSON. This allows the query to be a simple unchanging string.
This query does the same thing as before.
query NextPullRequestPage($pullRequestCursor: String) {
  repository(owner: "octocat", name: "Hello-World") {
    pullRequests(first: 2, after: $pullRequestCursor, states: [OPEN]) {
      edges {
        node {
          title
        }
        cursor
      }
    }
  }
}
{
  "pullRequestCursor": "Y3Vyc29yOnYyOpHOABR2bQ=="
}
{ "pullRequestCursor": null } will fetch the first page.
Its rate limit calculations are more complex than the REST API's. Instead of calls per hour, you get 5000 points per hour. Each query costs a certain number of points, which roughly corresponds to how much it costs Github to compute the results. You can find out how much a query costs by asking for its rateLimit information. If you pass it dryRun: true, it will just tell you the cost without running the query.
{
  rateLimit(dryRun: true) {
    limit
    cost
    remaining
    resetAt
  }
  repository(owner: "octocat", name: "Hello-World") {
    refs(first: 100, refPrefix: "refs/heads/") {
      totalCount
      nodes {
        name
      }
    }
    pullRequests(states: [OPEN]) {
      totalCount
    }
  }
}
{
  "data": {
    "rateLimit": {
      "limit": 5000,
      "cost": 1,
      "remaining": 4979,
      "resetAt": "2019-08-21T05:13:56Z"
    }
  }
}
This query costs just one point. I have 4979 points remaining, and my rate limit resets at 05:13 UTC.
The GraphQL API is extremely flexible. You should be able to do more with it while using fewer Github resources and less programming to work around rate limits.
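To stay inside that points budget, you can check the rateLimit payload before issuing the next query. A small Python sketch of that decision, using the sample response above (the function name is made up; the resetAt format is the ISO-8601 UTC timestamp the API returns):

```python
# Decide whether a query can run now, given a rateLimit payload like the
# sample above. Returns 0 if there is budget, else seconds until the reset.
from datetime import datetime, timezone

def seconds_until_budget(rate_limit, next_cost):
    """Return 0 if remaining points cover next_cost, else seconds until resetAt."""
    if rate_limit["remaining"] >= next_cost:
        return 0
    reset_at = datetime.strptime(rate_limit["resetAt"], "%Y-%m-%dT%H:%M:%SZ")
    reset_at = reset_at.replace(tzinfo=timezone.utc)
    return max(0, (reset_at - datetime.now(timezone.utc)).total_seconds())

sample = {"limit": 5000, "cost": 1, "remaining": 4979,
          "resetAt": "2019-08-21T05:13:56Z"}
print(seconds_until_budget(sample, next_cost=1))  # 0: plenty of budget left
```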

API Mapping Templates with Serverless

When using an http event with serverless-framework, multiple response status codes are created by default.
In case of an error, a Lambda returns its error message stringified in the errorMessage property, so you need a mapping template such as
$input.path('$.errorMessage')
for any status code you want to use. For example:
"response": {
  "statusCodes": {
    "200": {
      "pattern": ""
    },
    "500": {
      "pattern": ".*\"success\":false.*",
      "template": "$input.path('$.errorMessage')"
    }
  }
}
But serverless-framework does not create such a template by default, rendering the default status codes useless. And if I create a mapping template myself, the default response status codes are overwritten by my custom ones.
What is the correct way of mapping with the default status codes created by the serverless-framework#1.27.3?
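For context, this is roughly where such a mapping sits in serverless.yml when using the lambda integration; a sketch only, with a made-up function name and path:

```yaml
functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          method: get
          integration: lambda
          response:
            statusCodes:
              200:
                pattern: ''   # default response
              500:
                pattern: '.*"success":false.*'
                template: $input.path('$.errorMessage')
```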

Best way to store inheritance type configuration in mongodb?

I store four different variations of configuration in MongoDB as nested documents:
default configuration
environment-specific configuration
site-specific configuration
page-specific configuration
Precedence: default < environment < site < page.
A configuration with greater precedence can override fields in a lower-precedence configuration.
For example:
Default configuration:
{
  "a": {
    "b": "c"
  },
  "default": {
  }
}
Env configuration:
{
  "a": {
    "b": "c"
  }
}
Site configuration:
{
  "a": {
    "b": "c"
  }
}
Page configuration:
{
  "a": {
    "b": "c"
  }
}
So, whenever the configuration for a page is requested, I have to query all four documents and merge them. This would be the merged document for a page request:
{
  "a": {
    "b": "c"
  },
  "default": {
  }
}
So, if my page is requested a million times, then for each request I would fetch all four documents and merge them based on their precedence.
What is the best way to store this type of data? How can I avoid four DB calls on every page load? Is there a better way to store such configuration data?
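The precedence merge described above can be sketched as a recursive deep merge applied in order (a Python sketch; the sample values like "env" and the timeout field are made up to show overriding, and whether you cache the merged result per page or precompute it in Mongo is the open question):

```python
from functools import reduce

def deep_merge(base, override):
    """Recursively merge override into base; override wins on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Layers in precedence order: default < environment < site < page.
default_cfg = {"a": {"b": "c"}, "default": {}}
env_cfg = {"a": {"b": "env"}}
site_cfg = {"timeout": 30}
page_cfg = {"a": {"extra": True}}

effective = reduce(deep_merge, [default_cfg, env_cfg, site_cfg, page_cfg])
print(effective)  # {'a': {'b': 'env', 'extra': True}, 'default': {}, 'timeout': 30}
```

Caching this merged document (in the application or as a precomputed per-page document) would remove the four DB calls from the hot path.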