I recently migrated my parse.com app from Parse's hosted servers to Amazon AWS using Bitnami Parse Server and mLab.
The migration worked fine; the only problem I'm having is with the cloud code.
I edited the main.js located in /bitnami/apps/parse/htdocs/spec/cloud,
but I don't know how to deploy the file.
How do I update the cloud code file after editing?
I tried forever stop 0 followed by forever start server.js, but it didn't work.
Thanks in advance.
Check out the Bitnami Parse Server documentation:
https://wiki.bitnami.com/Applications/Bitnami_Parse_Server#How_to_start.2fstop_the_servers.3f
After making changes to main.js, just run
sudo /opt/bitnami/ctlscript.sh restart parse
to restart the Parse server.
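For context, a typical edit-and-restart cycle looks like this (the status subcommand belongs to the same Bitnami control script; the file path is the one from the question):

```shell
# Edit the cloud code file mentioned in the question
nano /bitnami/apps/parse/htdocs/spec/cloud/main.js

# Restart only the Parse service so the new cloud code is loaded
sudo /opt/bitnami/ctlscript.sh restart parse

# Check that the service came back up
sudo /opt/bitnami/ctlscript.sh status parse
```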
I want to deploy Kong in DB mode on Azure App Service using a docker-compose file.
I have the following docker-compose.yml file:
https://github.com/saurabhmasc/kong-compose/blob/main/docker-compose.yml
This file runs properly in Docker Desktop,
but when I tried to run it in Azure App Service, it did not start.
Please give us suggestions.
Thanks,
Saurabh
I am trying to set up a local cluster in Neo4j, following this tutorial: https://neo4j.com/docs/operations-manual/current/tutorial/local-causal-cluster/#tutorial-local-cluster-configure-cores. I have downloaded the latest Enterprise version (4.1.1) and followed all the steps in the tutorial. However, I am confused at the step that says to move to the bin directory and run "neo4j start". When I run this command it says the neo4j service is not found, so I searched and learned that I should first install Neo4j as a service; only then will the start command work. When I install it, the service runs for the core-01 instance. Following the tutorial, I am supposed to start Neo4j for core-02 and core-03 as well, but again the start command will not work unless I install each of them too.
Given this, am I supposed to install three different services, or is a single service supposed to cover the whole cluster of three instances? If a single one, then the neo4j service will always point to the core-01 instance.
Skipping the start command, if I run neo4j console in the bin directory of the three instances, I get this error in all three consoles.
How am I supposed to handle this? Am I not setting up the neo4j service properly, or is there an issue in my configuration?
I have a Docker container using Strapi (which uses MongoDB) on a now-defunct AWS EC2 instance. I need the content off that server; it can't run because the disk is full. So I've tried to retrieve all the files using SCP, which worked a treat apart from downloading the database content (the actual stuff I need; Strapi and Docker boot up fine, but because there is no database content, it is treated as a new instance).
Every time I try to download the contents of the db from AWS I get "permission denied".
I'm using SCP something like this:
scp -i /directory/to/***.pem -r user@ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com:strapi-docker/* /your/local/directory/files/to/download
Does anyone know how I can get this entire Docker container running locally with the database content?
You can temporarily change permissions (recursively) on the directory in question to be world-readable using chmod.
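As a minimal sketch (prepend sudo on the EC2 box if the files are owned by root; strapi-docker is the directory from the question):

```shell
# Make every file under the directory world-readable
chmod -R a+r strapi-docker

# Directories also need the execute bit so scp can descend into them
find strapi-docker -type d -exec chmod a+rx {} \;
```

Once the scp copy succeeds, you can tighten the permissions again.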
I successfully installed ddev for TYPO3 and now want to connect to the MariaDB database. But what are the credentials? If I ssh into the container and try to connect, I get a password prompt.
Access via external tools is described in Using Developer Tools with ddev.
Specifically you need to execute the following command to get the necessary credentials:
ddev describe
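As a sketch, assuming ddev's defaults (database, user, and password are all db; the published host port comes from the ddev describe output and varies per project):

```shell
# Show the project details, including the host port MariaDB is published on
ddev describe

# Connect from the host with any MySQL client
# (32768 is only an example; use the port that ddev describe printed)
mysql -h 127.0.0.1 -P 32768 -u db -pdb db
```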
When I upgraded ddev and deleted all the containers, everything stayed the same except that the new port number incremented by one:
mariadb
Host: localhost:portNumberIncrementedByOne
User/Pass: 'db/db'
I'm using a Chromebook Pixel 2, and it's easier to get Rocket working than Docker. I recently installed Rocket 1.1 into /usr/local/bin, and have a clone of the Kubernetes GitHub repo.
When I try to use ./hack/local-up-cluster.sh to start a cluster, it eventually fails with this message:
Failed to successfully run 'docker ps', please verify that docker is installed and $DOCKER_HOST is set correctly.
According to the docs, k8s supports Rocket. Can someone please guide me on how to start a local cluster without a working Docker installation?
Thanks in advance.
You need to set three environment variables before running ./hack/local-up-cluster.sh:
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=$PATH_TO_RKT_BINARY
$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
This is described in the docs for getting started with a local rkt cluster.
Try running export CONTAINER_RUNTIME="rocket" and then re-running the script.