I have a Parse Server running on top of MongoDB, and it's running 100% fine on my Dev Server, which is hosted on DigitalOcean. There I'm able to send GET requests to my server to obtain an image, as well as access the image via its Parse-Dashboard.
I cloned that droplet to set up a Production Server, and everything is running fine... except that I can't access the images from Parse, whether they were cloned from the Dev Server or uploaded after I initialized the new Production Server. I'm able to send GET requests to obtain all other fields, just not the image files. I also can't access an image file via the Parse-Dashboard: it returns a "404 - Oh no, we can't find that page!" error on the following URL: http://server.ip/parse/files/ProdServer/de632aeb61f7265926e554fabfb25180_image1.png
Other key things to note:
The Dev Server is hosted on a domain that has an SSL certificate; could this be an SSL issue?
I'm initializing the parse-dashboard with the --allowInsecureHTTP flag
Everything (even before the SSL) was working on the Dev Server beforehand
All packages + dependencies are up to date
tl;dr
How do I access the image files from my Parse Server, via Parse-Dashboard or GET request?
A couple of methods I tried... Since this was an elaborate process for me, allow me to document the steps I took to resolve this issue:
The first issue was, do the files exist? If so, where are they stored?
By accessing my parse-dashboard on port 4040, I tried to view the image via its URL path... So I knew the file existed somewhere, and I recursively searched my entire server for it, but to no avail.
Then, with more research, I found that files get stored as GridFS objects (GridFS is MongoDB's mechanism for files, since documents are capped at 16 MB), i.e. the images are stored inside my MongoDB. The way to access these objects is through a utility called mongofiles.
By running mongofiles -d dbname list I was able to view, in list form, all of the images stored on my Parse Server.
just to ensure the images weren't corrupt...
I also SFTP'd the images over to my local machine, and fortunately I could view them. So the problem was that the images weren't being served correctly...
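For anyone retracing these steps, a minimal sketch of the commands involved (the database name is a placeholder, and the file name is just the one from the URL above):

# List the files stored in GridFS for the given database (name is a placeholder)
mongofiles -d dbname list

# Pull one file down locally to confirm it opens correctly
mongofiles -d dbname get de632aeb61f7265926e554fabfb25180_image1.png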
The next issue was: why weren't the images being served correctly?
So my parse-dashboard was being served on port 4040, but for some reason the image file paths within their respective URLs were also prefixed with port 4040... It turns out that within my Parse Server config, the server URL was pointing to port 4040, while the server itself was being served on ****. By changing the URL back to ****, my images rendered properly on my parse-dashboard, and I was able to send HTTP requests for the images as well :)
tl;dr make sure your image file paths use the same port that your parse-server is actually being served on
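To illustrate the point, here is a minimal sketch of the relevant part of a Parse Server setup using the classic Express mounting pattern. The app id, keys, database name, and port 1337 are hypothetical placeholders, not my actual values; what matters is that serverURL points at the port the API is really listening on.

// A minimal sketch for illustration; ids, keys, and port 1337 are placeholders.
const express = require('express');
const ParseServer = require('parse-server').ParseServer;

const app = express();

const api = new ParseServer({
  databaseURI: 'mongodb://localhost:27017/dbname', // placeholder
  appId: 'myAppId',                                // placeholder
  masterKey: 'myMasterKey',                        // placeholder
  // serverURL must use the port the API actually listens on;
  // mine mistakenly pointed at the dashboard's port 4040
  serverURL: 'http://server.ip:1337/parse'
});

// Mount the Parse API on the /parse path
app.use('/parse', api);

app.listen(1337);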
We let the golem package automatically create a Dockerfile for us and can run the docker image and see the app at the root directory: http://localhost:3838/?...
But we would like the app to appear in a subdirectory like http://localhost:3838/myApp/v1/?... so that we can set up the necessary proxying for Apache and have this and other apps all available from a single server.
We can manually edit the Dockerfile to copy a shiny-server.conf file with the following information:
# Define a server that listens on port 3838
server {
  listen 3838;

  # Define a location at the base URL
  location /myApp/v1/ {

    # Host the directory of Shiny Apps stored in this directory
    site_dir /srv/shiny-server;

    # Log all Shiny output to files in this directory
    log_dir /var/log/shiny-server;
  }
}
The above solution feels like a hack and we are hoping there is functionality inside of golem that will allow us to set the subdirectory at which the app will appear.
Unfortunately there is no way to include an nginx configuration inside the Dockerfile programmatically: {golem} tries to help with the creation of the file, but some things still need to be done manually.
Also, note that {golem} doesn't create a Dockerfile with a Shiny Server in it; it creates a standalone Docker image that launches the app, so there is no Shiny Server running, just an R process. {shiny} being what it is, there is no way to natively run the app on a given path: it's always at the root, on a port.
That being said, what you can do is either edit the Dockerfile so that it also bundles nginx (or any other reverse proxy), so that you can serve the app on a path, or serve your application on another port using the port argument of add_dockerfile(); that might be easier to configure with your Apache proxy.
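For illustration, the nginx piece could look roughly like the snippet below. This is only a sketch, assuming the container still runs the plain R process on port 3838; the path /myApp/v1/ is taken from your example and is not something {golem} generates for you.

# Hypothetical nginx location block, placed in front of the R process
location /myApp/v1/ {
    # trailing slash strips the /myApp/v1/ prefix before proxying
    proxy_pass http://127.0.0.1:3838/;
    proxy_http_version 1.1;
    # shiny needs websocket upgrades to work through the proxy
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}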
Colin
I'm trying to make a Netlify app that posts data to an Atlas MongoDB. While I can post to the DB when I run my page from localhost, Netlify returns a 404 whenever I attempt to post data to the DB. I know it is not an issue with Atlas's whitelisted IP addresses, because I have whitelisted all IP addresses for the time being. I suspect that this has something to do with Netlify not properly reading or applying the process.env values I'm using to store my Atlas information, although I am not completely certain that is the cause. When I run it locally, I have my config set up to simply use the Atlas information directly rather than relying on a .env file. I'm using Mongoose to connect to the DB, and the connection portion of my code is the following in my production build:
mongoose.connect(process.env.MONGODB_URI || "mongodb://localhost/dbname");
This has not been working, but on the working copy that I run from localhost, I use:
const uri = `mongodb://atlasDB:<PASSWORDHERE>@atlasDB-shard-00-00-ot2tv.mongodb.net:27017,atlasDB-shard-00-01-ot2tv.mongodb.net:27017,atlasDB-shard-00-02-ot2tv.mongodb.net:27017/test?ssl=true&replicaSet=atlasDB-shard-0&authSource=admin&retryWrites=true`;
mongoose.connect(uri);
I have configured Netlify with a MONGODB_URI build environment variable of mongodb://atlasDB:<PASSWORDHERE>@atlasDB-shard-00-00-ot2tv.mongodb.net:27017,atlasDB-shard-00-01-ot2tv.mongodb.net:27017,atlasDB-shard-00-02-ot2tv.mongodb.net:27017/test?ssl=true&replicaSet=atlasDB-shard-0&authSource=admin&retryWrites=true
I have replaced PASSWORDHERE with the actual password in both instances, but the Netlify build environment variable does not have quotation marks around the value when viewed in the entry field on the Netlify website. I tried adding them, and it seemed to make no difference, but I may simply not have waited long enough for the change to take effect.
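To make the suspicion concrete, a quick check along these lines (a sketch only, using the same variable name as above) would at least confirm whether the value is defined in the environment where the connection actually runs:

// Sketch: confirm the env var is visible before attempting the connection
const mongoose = require("mongoose");

console.log("MONGODB_URI defined:", Boolean(process.env.MONGODB_URI));

mongoose
  .connect(process.env.MONGODB_URI || "mongodb://localhost/dbname")
  .then(() => console.log("connected to MongoDB"))
  .catch(err => console.error("connection failed:", err.message));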
Aside from Mongoose, I am not running any other dependencies that should have any effect on this problem. The project deadline is in a couple days, so any help with this would be greatly appreciated.
I'm trying to set up my own instance of Nextcloud on my server, but I'm running into a problem because I want Nextcloud to be available under https://example.com/cloud/.
Nextcloud is running in a CoreOS virtual machine called, let's say, myvm.
So this is the way I set up my Caddyfile:
example.com {
    gzip
    proxy /cloud myvm:8080 {
        transparent
        without /cloud
    }
}
I have other proxies, written similarly, that work fine for other services and VMs.
With this, and publishing port 8080 in my docker-compose file, I manage to connect to the Nextcloud instance. But every time I go to example.com/cloud/ it redirects me to example.com/apps/files/ instead of example.com/cloud/apps/files/.
If I enter this last URL manually, I can access Nextcloud, but the page doesn't load properly, because its assets are not requested with the cloud/ prefix and therefore cannot be loaded.
Is there a way to tell Nextcloud about this prefix through the configuration of the docker-compose file? (It's the only configuration I created; it works with just that and no extra work. I use one similar to the one available here (the Apache one).)
Or maybe I can improve the Caddyfile config? (By the way, if I don't use the without option, it just doesn't work at all and returns 404 when I go to the URL.)
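To make the question concrete, this is roughly the shape of compose configuration I'm imagining. The OVERWRITE* variables are only an assumption on my part, taken from the image documentation; I haven't verified that they solve the prefix problem.

version: '3'

services:
  app:
    image: nextcloud:apache
    ports:
      - "8080:80"
    environment:
      # Assumed reverse-proxy settings; not verified in my setup
      - OVERWRITEWEBROOT=/cloud
      - OVERWRITEHOST=example.com
      - OVERWRITEPROTOCOL=https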
In order to process some big data, I have to set up CKAN on a local machine. I've set up the whole system following this guide: http://docs.ckan.org/en/latest/maintaining/installing/install-from-source.html
I wanted to display a preview of a locally uploaded file, so the user can actually see it before downloading it. And it doesn't work, because previews only work for online files. For instance, it DOES work with this online file but NOT with a file I upload myself.
So I've been looking into the DataStore and DataPusher. I've followed every part of the guide, and it appears on my CKAN. However, I have an error. Specifically this one:
Upload error: An Error occurred while sending the job: 403 Client Error: Forbidden for url: http://127.0.0.1:8800/job
Here are the most important parts of my production.ini file (copying the whole thing would be very long):
ckan.site_url = http://localhost

ckan.plugins = datastore datapusher stats text_view image_view
               recline_view recline_graph_view recline_map_view webpage_view

ckan.datapusher.formats = csv xls xlsx tsv application/csv
                          application/vnd.ms-excel
                          application/vnd.openxmlformats-officedocument.spreadsheetml.sheet

ckan.datapusher.url = http://127.0.0.1:8800/
I truly have no idea what my problem could be. I tried changing the datapusher.url to 0.0.0.0 as the guide suggests, but it doesn't work either.
If the data to be added to CKAN is in a file on your computer, select the "Upload a file" option; CKAN will give you a file browser to select it. You should use the "Link to a file" option only for publicly available resources.
Have you installed the DataPusher as well? It's a separate process running on port 8800. CKAN uses the DataStore to provide a grid view of tabular data, and data needs to be pushed through the DataPusher before the DataStore can use it.
Yes, you need to set up the DataPusher. It's not activated by default.
Pull the datapusher code, install the dependencies and run it using:
python datapusher/main.py deployment/settings.py
The instructions to configure the settings are on the repository.
Here's the datapusher manual: http://docs.ckan.org/projects/datapusher/en/latest/
Here's the repository: https://github.com/ckan/datapusher
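For completeness, a rough sketch of those steps is below; exact file and path names may differ between datapusher versions, so check the repository's README.

# Sketch only: file names and paths may differ by version
git clone https://github.com/ckan/datapusher
cd datapusher
pip install -r requirements.txt   # install the dependencies
python datapusher/main.py deployment/settings.py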
Had the exact same error message.
This post solved my issue though.
In short: insert/check the following in your virtual host in /etc/apache2/sites-enabled/datapusher.conf:
<Directory /etc/ckan>
    Options All
    AllowOverride All
    Require all granted
</Directory>
I'm dealing with an annoying problem. I have to make some changes to a large website whose source code is not under my control (sub-contracting). As usual, I'm trying to rebuild a local copy of the site to test my changes. The problem is that almost all paths used in URLs for images, CSS, links, etc. are root-relative paths, like
href="/style/main.css"
This is a problem because I develop on an intranet server and I put this project into a nested directory, so the URL to the project files is something like
http://myIntranet.com/checkout/project
What happens is that the paths from the first example don't resolve correctly. So I tried using the base tag to set the directory from which links should be resolved, as in <base href="http://myIntranet.com/checkout/project/">.
That works fine when the path is
href="style/main.css"
without the slash at the start, but fails when the slash is there, because (I think) the link is then resolved from the server host, not from the URI in the base tag.
So... is there any possibility to make "/dir/file.html" links resolve from a root other than the server root? Or do I have to manually remove all the leading slashes from the paths (urgh)?
Thanks in advance. :)
If you're doing local development on a website, you can do either of the below. Both involve moving your project into its own base folder instead of working with subfolders inside your document root.
Virtual host on different port
In your web server, create another listening port and virtual host.
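For example, in Apache it could look roughly like this (the DocumentRoot path is a placeholder; adjust it to wherever the project lives):

# Listen on an extra port and give the project its own document root
Listen 81
<VirtualHost *:81>
    DocumentRoot "/path/to/project"
    <Directory "/path/to/project">
        Require all granted
    </Directory>
</VirtualHost>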
After a restart, you can access your web server at http://localhost:81 or whatever port number you chose.
Virtual host on same port
Only create another virtual host (like above), but make sure to use name-based virtual hosting.
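A sketch of that, again in Apache syntax; the host name matches the hosts-file entry below, and the document root is a placeholder:

# Name-based virtual host on the existing port
# (on Apache 2.2 you also need a NameVirtualHost *:80 directive)
<VirtualHost *:80>
    ServerName myproject1.self.com
    DocumentRoot "/path/to/project"
    <Directory "/path/to/project">
        Require all granted
    </Directory>
</VirtualHost>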
After a restart, you have to add another entry to your hosts file (c:\windows\system32\drivers\etc\hosts or /etc/hosts) using a simple text editor:
127.0.0.1 localhost myproject1.self.com
The above line should already exist, so you can keep adding more names to it:
127.0.0.1 localhost myproject1.self.com myproject2.self.com
Personal preference
I like the second option because I don't have to mess with ports, and things like the Facebook API keep working as you expect.
I hope this all makes sense, let me know otherwise.