Unable to run Algolia DocSearch on my Docusaurus v2 website

I have received my Algolia apiKey and indexName, and I have correctly added them under themeConfig in my docusaurus.config.js file:
algolia: {
  apiKey: 'API_KEY',
  indexName: 'INDEX_NAME',
},
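For context, that block sits under themeConfig in the site config, roughly like this (a minimal sketch; 'API_KEY' and 'INDEX_NAME' stand in for the real values DocSearch sends you):

// docusaurus.config.js (sketch)
module.exports = {
  // ...title, url, presets, etc.
  themeConfig: {
    algolia: {
      apiKey: 'API_KEY',       // search-only API key from DocSearch
      indexName: 'INDEX_NAME', // index name from DocSearch
    },
  },
};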
However, my DocSearch is not working. I applied for DocSearch via the free tier.
As per the Docusaurus docs, I only need to add the apiKey and indexName I received to get it working. I did that, but it's not working (only the loading indicator appears, nothing else). Please help.

I was running my site locally after integrating DocSearch (I didn't push the changes to my live website, silly I know). It seems Algolia was unable to crawl my site hosted on the local server (obviously, because the crawler was configured to run against the domain I provided). Pushing the changes live solved the issue.
Don't forget to push the changes live after integrating algolia.
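For context, the DocSearch crawler runs against the public domain you registered, which is why a locally hosted site can never be indexed. A sketch of the relevant part of a DocSearch crawler config, with placeholder values (assuming the legacy JSON config format):

{
  "index_name": "INDEX_NAME",
  "start_urls": ["https://yourdomain.com/"]
}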

Related

Netlify deploy can't connect to Heroku backend

I've built a wee program that works fine when I run it locally. I've deployed the backend to Heroku, and I can access that either by going straight to the URL (http://gymbud-tracker.herokuapp.com/users) or when running the frontend locally. So far so good.
However, when I run npm run-script build and deploy it to Netlify, something goes wrong, and any attempt to access the server gives me the following error in the console:
auth.js:37 Error: Network Error
at e.exports (createError.js:16)
at XMLHttpRequest.p.onerror (xhr.js:99)
The action that is pushing that error is the following, if it is relevant:
export const signin = (formData, history) => async (dispatch) => {
  try {
    const { data } = await api.signIn(formData);
    dispatch({ type: AUTH, data });
    history.push("../signedin");
  } catch (error) {
    console.log(error);
  }
};
I've been tearing my hair out trying to work out what is changing when I build and deploy, but cannot work it out.
As I say, if I run the front end locally it accesses the Heroku backend with no problem: no errors, and working exactly as I'd expect. The API call is correct, I believe: const API = axios.create({ baseURL: 'http://gymbud-tracker.herokuapp.com/' });
I wondered if it was an issue with network access to the MongoDB database that Heroku is linked to, but it's open to "0.0.0.0/0" (I've not taken any security precautions yet, don't kill me!). The MongoDB database is actually in the same cluster as other projects I've built, which haven't had this problem at all.
Any ideas what I need to do?
The front end is live here: https://gym-bud.netlify.app/
And the whole thing is deployed to GitHub here: https://github.com/gordonmaloney/gymbud
Your issue is CORS (Cross-Origin Resource Sharing). When I visit your site and inspect the page, I can see a CORS error in the JavaScript console, which is how I know this.
This error essentially means that your public-facing application (running live on Netlify) is making an HTTP request from your JavaScript front end to your Heroku backend, which is deployed on a different domain.
CORS dictates which frontend origins are allowed to make requests to your backend API.
To fix this, modify your Heroku application so that it returns the appropriate Access-Control-Allow-Origin header. This article on MDN explains the header and how you can use it.
Here's a simple example of the header you could set on your Heroku backend to allow this to work:
Access-Control-Allow-Origin: *
Please be sure to read the MDN documentation, however, as this example will allow any front-end application to make requests to your Heroku backend when in reality, you'll likely want to restrict it to just the front-end domains you build.
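For illustration, here is a minimal sketch of one way to set that header, assuming the backend is an Express app using the cors middleware package (the question doesn't show which framework the backend uses):

// Sketch only: assumes an Express backend and the `cors` npm package.
const express = require("express");
const cors = require("cors");

const app = express();

// Allow only the deployed frontend origin rather than "*".
app.use(cors({ origin: "https://gym-bud.netlify.app" }));

app.get("/users", (req, res) => {
  res.json([]); // placeholder route for illustration
});

app.listen(process.env.PORT || 3000);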
God I feel so daft, but at least I've worked it out.
I looked at the console in a different browser (Edge), and it said it was blocking the request as mixed content. I realised I had just missed out the s in https in my API call, so it wasn't actually a CORS issue (I don't think?), but just a typo on my part!
So I changed:
const API = axios.create({baseURL: 'http://gymbud-tracker.herokuapp.com' });
To this:
const API = axios.create({baseURL: 'https://gymbud-tracker.herokuapp.com' });
And now it is working perfectly ☺️
Thanks for your help! Even if it wasn't the issue here, I've definitely learned a lot more about CORS along the way, so that's good.

How to make Amplify CloudFormation aware of changes made outside of it

I've reached a point where Amplify fails to push any change I make, raising an exception about a nonexistent UserPool client ID.
Something like:
Resource Name: XXXXXXXXXXX (AWS::Cognito::UserPoolClient)
Event Type: update
Reason: User pool client does not exist. (Service: AWSCognitoIdentityProviderService; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: YYYYYYYYYYYYYYYYYY)
URL: https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/xxxxxxxxxxx
I have explained my whole journey in a GitHub issue on the Amplify CLI repo, which you can see here; unfortunately, I'm not getting much support from the Amplify team, as you can see there.
I have also created a Stack Overflow question with the initial problem I was facing, which you can check here.
After digging into this issue for 3-4 long days (it is blocking my deployment), I came to a guess about what happened:
I added auth to my Amplify project months ago.
Eventually, I noticed that one of the created clients was not being used, so I deleted it using the Cognito console.
I did not update the auth category for months.
Now that I have introduced social authentication, Amplify tried to update the auth resources, and because that client ID no longer exists, the update fails and raises the error above.
Now anything I try to push fails, and I guess the reason is this drift between what Amplify expects and what the infrastructure actually is.
Every time I pull --restore my environment, my amplify-meta.json is updated with this invalid client ID (and yes, I have tried changing it in the local amplify-meta.json and pushing), something like:
"auth": {
"myproject": {
"service": "Cognito",
"providerPlugin": "awscloudformation",
"output": {
"GoogleWebClient": "111111111.apps.googleusercontent.com",
"AppClientSecret": "aaaaaaaaaaa",
"UserPoolId": "region-pooId",
"AppClientIDWeb": "VALID ID",
"AppClientID": "INVALID ID",
"FacebookWebClient": "2222222222",
"IdentityPoolId": "region:Id",
"IdentityPoolName": "myproject__env",
"UserPoolName": "mypoolname"
},
"lastPushTimeStamp": "2020-05-13T20:48:29.797Z",
"providerMetadata": {
"s3TemplateURL": "https://s3.amazonaws.com/myproject-deployment/amplify-cfn-templates/auth/lexis-cloudformation-template.yml",
"logicalId": "authmyproject"
},
"lastPushDirHash": "XXXXXXXXXXXXXX="
}
},
I have a different, valid client ID in Cognito, so as a last resort I went directly to the s3TemplateURL referenced in this config and updated the ID there to the valid one; my guess was that this file was the single source of truth for Amplify.
But no success: I still get the same wrong ID after pull --restore.
Any idea how I can bring Amplify back in sync, making it aware that this client ID no longer exists and removing it from the CloudFormation templates?
Amplify CLI does not support this yet.
I had the same problem: I updated AppSync and Cognito in the cloud and could not pull the changes into my project. When I ran amplify status, it said there were no changes.
So I contacted AWS support, and they said this is a coming feature.
For now, the solution is to make every change through the Amplify CLI and manage Amplify from there; don't change anything directly in the cloud console.

Netlify returning 404 when connecting to Atlas MongoDB

I'm trying to make a Netlify app that posts data to an Atlas MongoDB. While I can post to the DB when I run my page from localhost, Netlify returns a 404 whenever I attempt to post data to the DB. I know it is not an issue with Atlas's whitelisted IP addresses, because I have whitelisted all IP addresses for the time being. I suspect this has something to do with Netlify not properly reading or running the process.env values I'm using to store my Atlas information, although I am not completely certain that is the cause. When I run it locally, I have my config set up to simply use the Atlas information directly rather than relying on a .env file. I'm using Mongoose to connect to the DB, and the connection portion of my production build is the following:
mongoose.connect(process.env.MONGODB_URI || "mongodb://localhost/dbname");
This has not been working, but on the working copy that I run from localhost, I use:
const uri = `mongodb://atlasDB:<PASSWORDHERE>@atlasDB-shard-00-00-ot2tv.mongodb.net:27017,atlasDB-shard-00-01-ot2tv.mongodb.net:27017,atlasDB-shard-00-02-ot2tv.mongodb.net:27017/test?ssl=true&replicaSet=atlasDB-shard-0&authSource=admin&retryWrites=true`;
mongoose.connect(uri);
I have configured Netlify with a MONGODB_URI build environment variable of mongodb://atlasDB:<PASSWORDHERE>@atlasDB-shard-00-00-ot2tv.mongodb.net:27017,atlasDB-shard-00-01-ot2tv.mongodb.net:27017,atlasDB-shard-00-02-ot2tv.mongodb.net:27017/test?ssl=true&replicaSet=atlasDB-shard-0&authSource=admin&retryWrites=true
I have replaced PASSWORDHERE with the actual password in both instances. The Netlify build environment variable does not have quotation marks around the value when viewed in the entry field on the Netlify website; I tried adding them, but it seemed to make no difference (though I may simply not have waited long enough for the change to take effect).
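For what it's worth, here is a minimal sketch of the connection code with explicit logging, which can help confirm at runtime whether the environment variable is actually being picked up (the names and fallback URI come from the snippets above):

// Sketch: log which URI is used and whether the connection succeeds.
const mongoose = require("mongoose");

const uri = process.env.MONGODB_URI || "mongodb://localhost/dbname";
console.log("MONGODB_URI set:", Boolean(process.env.MONGODB_URI));

mongoose
  .connect(uri)
  .then(() => console.log("Connected to MongoDB"))
  .catch((err) => console.error("MongoDB connection failed:", err.message));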
Aside from Mongoose, I am not running any other dependencies that should have any effect on this problem. The project deadline is in a couple days, so any help with this would be greatly appreciated.

How do I Re-route Ghost Blog Admin URL without modifying the API Address?

The Ghost blog platform has a setting that allows you to change the admin panel login location (which defaults to https://whateveryoursiteis.com/ghost). The methodology / docs for changing that setting can be found here: https://ghost.org/docs/config/#admin-url
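For reference, a sketch of that setting in Ghost's config.production.json, with field names per the linked docs (both domains are placeholders):

{
  "url": "https://whateveryoursiteis.com",
  "admin": {
    "url": "https://youradminsite.com"
  }
}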
However, when using the above methodology, the API URL used for search etc. is ALSO modified, meaning all requests to the Ghost API (not just admin access) will be forwarded to the alternate domain.
My question is: what is the best way to redirect the admin URL to a different domain/protocol while allowing the API URL used by Ghost to remain the same?
More background:
We are running Ghost on top of GKE (Google Kubernetes Engine) behind a multi-region ingress. This setup lets us dump our Cloud SQL DB down to a SQLite file and build that database into our production Docker containers, which are then deployed to the different Kubernetes nodes fronted by the GCE-Ingress load balancer.
Since we need to rebuild that database/container on content change (not just on code change), we need a separate admin URL backed by Cloud SQL where we can persist/modify our data, which then triggers the rebuild in our CI pipeline via Ghost webhooks.
Another related question might be:
Is it possible to use standard Ghost redirects (created via https://docs.ghost.org/concepts/redirects/) to redirect the admin panel URL (i.e. https://whateveryoursiteis.com/ghost) to a different domain (i.e. https://youradminsite.com/ghost)?
Another Related GKE / GCE-Ingress Question:
Is it possible to create 301 redirects natively using Kubernetes GCE-Ingress on GKE, without adding an nginx container etc.?
That will be my first attempt after posting this, but I figured either way it might help another Ghost platform fan down the line someplace. I will respond back as I find answers to those questions (assuming someone doesn't beat me to it!).
Regarding your question about whether it's possible to create 301 redirects without adding an nginx container: I can suggest using Istio; find out more information about traffic routing here.
OK. So as it turns out, the Ghost team currently has things set up to point API connections at the admin URL. So if you change your admin URL, expect your clients to attempt to connect to that URL.
I am going to raise the possibility of splitting these off as a feature request over on the Ghost forums (as soon as I get out from under pre-launch hell on the current project).
Here's the official Ghost response:
What is referred as 'official docker image' is not something that we as a Ghost team support.
The APIs are indeed hosted under the same URL as the admin and that's by design and not really a bug. Introducing configuration options for each API Ghost instance hosts would be a feature and should be discussed at our forum first 👍 I think it's a nice idea to be able to serve APIs from different host, but it's not something that is within our priorities at the moment.
In case you need more granular handling of admin site, you could introduce those on your proxy level and for example, handle requests that are coming to /ghost/api with a different set of rules.
See the full discussion over here on the TryGhost GitHub:
https://github.com/TryGhost/Ghost/issues/10441#issuecomment-460378033
I haven't looked into what it would take to implement the feature, but the suggestion of proxying the requests could work... if only I didn't need to run on GKE multi-region (which requires GCE-Ingress, which has no support for redirection, hah!). This would be relatively easy to solve with the nginx ingress.
Hopefully this helps someone; I will update as I work through the process. As of now I have solved it by dumping my GCP Cloud SQL database down to a SQLite db file at build time, thereby keeping my admin instance clean and separate from the API endpoint (which, for me, remains the same URL).

What's the cause of "Failed to preload gadget..." for Sharebox gadgets in IBM Connections

I've followed the procedure documented at "Adding new ways to share content"
...but keep getting an error:
Failed to preload gadget http://....
Detailed error: 400 Gadget is not trusted to render in this container. cre.iruntime:cre.iwidget.event:cre.wire:cre.iwidget:cre.iwidget.itemset:cre….ibm.connections.ee:ibm.connections.ee:container.nongadget:open-views.js:4
Screenshot of the Sharebox error: http://i7.minus.com/ibiLz4SSWA5EL8.png
This looks like some sort of trust problem with external servers, but my other gadgets (embedded experience & home page gadgets) on the same external host are all working fine.
What have I missed out in the configuration?
OK, shamefully, it looks like I missed switching the security attribute whitelistEnabled="true" to whitelistEnabled="false" in:
/opt/IBM/WebSphere/AppServer/profiles/Dmgr01/config/cells/connectionswwCell01/LotusConnections-config/opensocial-config.xml
Here:
<security whitelistEnabled="false" featureAdminEnabled="true">
More details in this slide: How to add your own OpenSocial Gadgets to IBM Connections.
Of course, in a production system, you'll have to check out the opensocial config using wsadmin.sh, edit it, check it back in, and restart.