Remote avatars for users/rooms not loading after migrating Synapse - matrix-synapse

I migrated perthchat.org recently and I've run into the same bug where remote user/room avatars stop loading completely. So I figured I'd just purge the remote media up to a future date again:
curl -X POST --header "Authorization: Bearer long-access-token" '172.18.0.5:8008/_synapse/admin/v1/purge_media_cache?before_ts=1626710400'
Strangely this hasn't resolved the issue :P none of the remote avatars will load. Doing this media purge hasn't freed up any space either. Here's the before:
"msg": [
"381M\t/matrix/synapse/storage/media-store/remote_content",
"5.3G\t/matrix/synapse/storage/media-store/remote_thumbnail"
]
and the after:
"msg": [
"381M\t/matrix/synapse/storage/media-store/remote_content",
"5.3G\t/matrix/synapse/storage/media-store/remote_thumbnail"
]
I should also note that the backup before this migration only copied over the /matrix/synapse/storage/media-store/remote_thumbnail folder, not the /matrix/synapse/storage/media-store/remote_content folder.
Does anyone know anything else I can try to get remote user/room avatars loading again?

The solution for this was to purge the remote media up to a future date. Don't forget the epoch timestamp should be in milliseconds, not seconds!
# Example:
# $ date --date '149 days ago' +%s
# 1589442217
# $ curl -X POST --header "Authorization: Bearer ACCESS_TOKEN" 'https://matrix.perthchat.org/_synapse/admin/v1/purge_media_cache?before_ts=1589442217000'
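To purge everything up to a point in the future, the cutoff can be computed in milliseconds directly. A minimal sketch, assuming GNU date and a hypothetical homeserver host and token (substitute your own):

```shell
# Epoch timestamp 30 days in the future, in milliseconds (GNU date)
FUTURE_TS_MS=$(( $(date --date '30 days' +%s) * 1000 ))
echo "$FUTURE_TS_MS"

# Hypothetical host and token; substitute your own:
# curl -X POST --header "Authorization: Bearer ACCESS_TOKEN" \
#   "https://matrix.example.org/_synapse/admin/v1/purge_media_cache?before_ts=$FUTURE_TS_MS"
```

Multiplying the seconds value by 1000 avoids the seconds-vs-milliseconds mistake described above.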

testRigor CI/CD returns RESULT:OK but doesn't trigger tests

Team,
I'm using the script on the CI/CD tab for the integration with GitHub Actions (ubuntu-latest).
The issue is that when I try to run labeled tests, every request gives me:
{
"result": "ok",
"queueId": "QUEUEID"
}
but no test runs after making that request. So I tried the following:
1. Running without labels to run all tests (it works).
2. Running with a single label, like
"labels": ["mobile"] or "labels": "mobile"
This gives the same result as above (result: ok) but doesn't trigger any test.
2.1. Running with two labels:
"labels": ["mobile", "login"]
Still the same.
So, is there any way I can see the logs from your side, or any way I can check why it's not triggering the tests?
Some examples that I've been trying:
curl -X POST \
-H 'Content-type: application/json' \
-H 'auth-token: TR_AUTH_TOKEN' \
--data '{ "fileUrl":"URL_TEST_TO_A_PUBLIC_CLOUD", "labels":["Mobile","Login" ] }' \
https://api.testrigor.com/api/v1/apps/TEST_CASES_ID/retest
curl -X POST \
-H 'Content-type: application/json' \
-H 'auth-token: TR_AUTH_TOKEN' \
--data '{ "fileUrl":"URL_TEST_TO_A_PUBLIC_CLOUD", "forceCancelPreviousTesting":true, "storedValues":{"storedValueName1":"Value"}, "labels":["Mobile","Login" ] }' \
https://api.testrigor.com/api/v1/apps/TEST_CASES_ID/retest
PS: This is for Android tests.
The tests will trigger eventually. Once you make the API call, the system needs to download your APK, start the emulator, upload the APK to the emulator, and launch it.
This all takes a bit of time, and your task might be queued if you are using the free version of testRigor.
When you trigger a retest with specific labels, you won't see the run in the main view; that view only shows the last run triggered for all the test cases in the suite. Label/branch runs are listed under the All Runs option in the left menu.

Getting "not found" after authenticating when trying to initiate GitHub workflow via REST

I am trying to trigger the workflow_dispatch action for a GitHub workflow via REST but I am getting a "not found" error.
My question is similar to this one but the difference is that I am still getting the "not found" error even though the header indicates I am authenticated (the rate limit has increased to 5,000).
Here's my script:
#!/bin/bash
# https://docs.github.com/en/rest/reference/actions#create-a-workflow-dispatch-event
OWNER='myGithubOrganization'
REPO='myRepo'
WORKFLOW_ID='main.yml'
POST_URL="https://api.github.com/repos/$OWNER/$REPO/actions/workflows/$WORKFLOW_ID/dispatches"
echo "Calling $POST_URL"
GITHUB_PERSONAL_ACCESS_TOKEN=$(echo "$PLATFORM_VARIABLES" | base64 --decode | jq '.GITHUB_PERSONAL_ACCESS_TOKEN' --raw-output)
# -i to include headers.
curl \
-i \
-X POST \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token $GITHUB_PERSONAL_ACCESS_TOKEN" \
"$POST_URL" \
-d '{"ref":"ref"}'
In the headers, I see the rate limit has increased to 5,000, so I know I am logged in.
The personal access token has the following permissions:
repo
workflow
admin:org_hook
The personal access token is for a machine user.
In the repo settings, under "Collaborators and teams", the machine user account has the "Read" role.
What more do I need to do to trigger the workflow?
The machine user needs to have write access, not read access.
This is true even if the workflow does something like run CI tests and does not write any code.
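One way to check whether this is the problem: GitHub's REST API exposes a collaborator's effective permission level. A minimal sketch with hypothetical org/repo/user names, reusing the token from the script above; the response's "permission" field should be "write" or "admin" for workflow_dispatch to work:

```shell
# Hypothetical names; substitute your own
OWNER='myGithubOrganization'
REPO='myRepo'
MACHINE_USER='my-machine-user'

# GitHub's "get repository permissions for a user" endpoint
PERM_URL="https://api.github.com/repos/$OWNER/$REPO/collaborators/$MACHINE_USER/permission"
echo "$PERM_URL"

# curl -H "Accept: application/vnd.github.v3+json" \
#      -H "Authorization: token $GITHUB_PERSONAL_ACCESS_TOKEN" \
#      "$PERM_URL"
```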

Can't download using Nexus 3 REST API and CURL

I'm trying to download the latest snapshot dependency of a zip on Nexus 3 (version 3.22.1-02) from a command line using curl:
curl -u username:password -X GET "https://mynexusserver/service/rest/v1/search/assets/download?sort=version&repository=snapshotsrepo&maven.groupId=mygroup&maven.artefactId=myartefact&maven.extension=zip" -H "accept: application/json" -o myartefact.zip
This request is similar to this example: http://help.sonatype.com/repomanager3/rest-and-integration-api/search-api#SearchAPI-DownloadingtheLatestVersionofanAsset but no result is returned, myartefact.zip is empty.
However with the same URL, my artefact is downloaded from a web browser or with gradle.
With curl the following command line is working fine, returning the list of all snapshot versions of my artefact:
curl -u username:password -X GET "https://mynexusserver/service/rest/v1/search/assets?sort=version&repository=snapshotsrepo&maven.groupId=mygroup&maven.artefactId=myartefact&maven.extension=zip" -H "accept: application/json" -o myartefact.zip
Downloading the artefact directly is working fine as well with a command line like:
curl -u username:password "https://mynexusserver/repository/snapshotsrepo/mygroup/batchfactory/myversion-SNAPSHOT/myartefact-myversion-mytimestamp.zip" -H "accept: application/json" -o myartefact.zip
Verbose logs (-v option) show the artefact is found (I get HTTP/1.1 302 Found message) but nothing is downloaded.
Using wget doesn't work any better; I can't even query the list of snapshot versions of the artefact.
Am I missing something?
Thanks @Zeitounator, after adding "-L" the command line works fine. Considering the 302 code, this feels obvious now...
The Nexus documentation should probably also be updated to mention the "-L" option.
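For reference, here is a sketch of the corrected request with -L added so curl follows the 302 redirect to the actual asset. Server name and credentials are the placeholders from the question; note that the Nexus search API documents the parameter name as maven.artifactId:

```shell
# Search endpoint and query, as in the question (placeholder host/values)
BASE="https://mynexusserver/service/rest/v1/search/assets/download"
QUERY="sort=version&repository=snapshotsrepo&maven.groupId=mygroup&maven.artifactId=myartefact&maven.extension=zip"
echo "$BASE?$QUERY"

# -L follows the 302 redirect that the search endpoint returns:
# curl -u username:password -L "$BASE?$QUERY" -H "accept: application/json" -o myartefact.zip
```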

Using cURL with large data files

I have this cURL command to run:
curl -X POST -H "Content-Type: application/json" \
-u "{username}":"{password}" \
"https://gateway.watsonplatform.net/retrieve-and-rank/api/v1/solr_clusters/{solr_cluster_id}/solr/example_collection/update" \
--data-binary @./data.json
Where data.json is a 60 GB file. I ran it, and my computer bluescreened after reaching 100% memory usage. I assume from this that cURL reads the entire file into memory before sending it.
Obviously, with 16 GB of RAM I won't be able to do that. So my question is: is there a way to use cURL to send a huge file like that? And if not, is there an alternative way to accomplish the same thing cURL does?
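One commonly used workaround (not from the original thread) is curl's -T/--upload-file option, which streams the file from disk instead of buffering it in memory the way --data-binary does. A small self-contained demonstration using the file:// protocol, followed by the hypothetical shape of the real request:

```shell
# -T streams the request body from disk rather than loading it into RAM.
# Demonstrate with curl's file:// protocol so this runs without a server:
printf '{"ok": true}' > /tmp/data.json
curl -s -T /tmp/data.json "file:///tmp/uploaded.json"
cat /tmp/uploaded.json

# Shape of the real request (placeholder credentials/IDs as in the question):
# curl -X POST -H "Content-Type: application/json" \
#   -u "{username}":"{password}" \
#   -T ./data.json \
#   "https://gateway.watsonplatform.net/retrieve-and-rank/api/v1/solr_clusters/{solr_cluster_id}/solr/example_collection/update"
```

Note that -T defaults to PUT, so -X POST is still needed when the endpoint expects a POST.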

Algolia: Delete multiple records from dashboard

How can I delete multiple records at once? Is it possible to select all records of, say, the "products" post_type and delete them, or does it have to be one by one? (I'm not trying to clear all the records.)
Algolia's dashboard is not designed to be a complete graphical interface on top of the API, it's mostly here for convenience, understanding and testing purposes, not complete management of the data.
As soon as you start being limited by the dashboard, you should probably write a small script to achieve what you're trying to do.
Fortunately, it's been designed to be as easy as possible.
With PHP, here's how it would look:
First, let's create a small folder to hold the script.
mkdir /tmp/clear-algolia && cd /tmp/clear-algolia
If you don't have composer yet, you can simply install it in the current folder by launching the commands described here.
If you've just installed it and want to use it only for this session:
alias composer='php composer.phar'
Then install Algolia using composer:
composer require algolia/algoliasearch-client-php
Write a small script along those lines:
<?php
// removeSpecific.php
require __DIR__ . '/vendor/autoload.php';
$client = new \AlgoliaSearch\Client("YOUR_APP_ID", "YOUR_ADMIN_API_KEY");
$index = $client->initIndex('YOUR_INDEX');
$index->deleteByQuery('', [ 'filters' => 'post_type:products' ]);
?>
Then run it:
php removeSpecific.php
And you're good to go! Next time you want to do an operation on your index, you'll only have to change the last line of the script to achieve what you want.
You can use the REST API.
It can be easier or faster to do it with Postman.
Here you can check a simple request: https://www.algolia.com/doc/rest-api/search/#delete-by
To first check what you are deleting, you can use:
curl --location --request POST 'https://[ApplicationID]-dsn.algolia.net/1/indexes/[IndexName]/query' \
--header 'X-Algolia-Application-Id: XXXXXXXXXXXX' \
--header 'X-Algolia-API-Key: XXXXXXXXXXXXXXXXXXXXXXXX' \
--header 'Content-Type: application/json' \
--data-raw '{
"params":"numericFilters=id<=9000"
}'
And to delete the records you can use:
curl --location --request POST 'https://[ApplicationID].algolia.net/1/indexes/[IndexName]/deleteByQuery' \
--header 'X-Algolia-Application-Id: XXXXXXXXXXXX' \
--header 'X-Algolia-API-Key: XXXXXXXXXXXXXXXXXXXXX' \
--header 'Content-Type: application/json' \
--data-raw '{
"params":"numericFilters=id<=8000"
}'
The "params" should receive a Search Parameter, you can find a list here: https://www.algolia.com/doc/api-reference/search-api-parameters/