I have a Debian 6.0.7 (Squeeze) server and a Google Compute Engine CentOS 7 instance. I have sites hosted on the GCE instance, for example blablabla.com. I need to configure my local Debian server to receive and send email for the blablabla.com site on GCE, so that it appears as if blablabla.com's mail is handled by my local Debian server. Any help on how to achieve this, please?
That does not seem like a usual configuration, but in this document you can find some options that can be implemented within GCE instances; keep in mind that port 25 is not open.
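For the receiving side, the gist is DNS plus Postfix on the Debian box: the domain's MX record has to point at the Debian server, and Postfix has to accept mail for the domain. A rough sketch, assuming the Debian server has a reachable public IP (the host names and IP below are placeholders, not taken from the question):

    ; in the blablabla.com DNS zone
    blablabla.com.        IN MX 10  mail.blablabla.com.
    mail.blablabla.com.   IN A      203.0.113.10    ; public IP of the Debian server

    # /etc/postfix/main.cf on the Debian server (excerpt)
    myhostname      = mail.blablabla.com
    mydestination   = blablabla.com, mail.blablabla.com, localhost
    inet_interfaces = all

The port 25 restriction mentioned above applies to traffic leaving GCE instances, so mail sent directly from the Debian box itself is not affected by it.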
I have MongoDB installed on my desktop, and I have developed a web application that uses it. I now want to deploy this web app on an EC2 Linux instance and test it. I don't want to set up another MongoDB on EC2 separately; I'd rather use the one on my desktop. I understand that it is not as simple as putting in my IP and Mongo port and just connecting.
I have added port-forwarding settings on my router like this -
Also I have opened my firewall for this port by adding an inbound rule.
Yet I'm not able to connect. What am I missing here?
Thanks in advance
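One frequently overlooked piece in a setup like this is that mongod only listens on 127.0.0.1 by default, so the forwarded traffic never reaches it even when the router and firewall rules are right. A minimal sketch of the change, assuming a MongoDB version using the YAML config format (the paths and test connection string are assumptions, not taken from the question):

    # /etc/mongod.conf on the desktop (excerpt)
    net:
      port: 27017
      bindIp: 0.0.0.0    # listen on all interfaces, not only localhost

    # after restarting mongod, test from the EC2 instance with:
    mongo "mongodb://YOUR_PUBLIC_IP:27017/test"

Exposing MongoDB to the internet without authentication is risky, so enabling auth or restricting the inbound rule to the EC2 instance's IP is advisable.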
Over the years, I used No-IP to link a domain to my IP address, and then used No-IP's DUC (Dynamic Update Client) to update my IP, so that the domain will always point to my IP.
That's very handy for running dedicated game servers.
Is there a DUC-equivalent for Google Cloud DNS?
In essence - No - there isn't :(
Unless you're using Google Domains for your domain hosting - then yes, they support just the thing.
Cloud DNS doesn't have that functionality. There are several workarounds, such as reserving a static public IP for your VM, which in my opinion would be the best way to do it - unless your VM gets deployed using Deployment Manager, in which case it may require some more scripting.
Similar questions have been raised on Stack Overflow here and here, which you might find helpful.
If you're running Linux, here you'll find a complete script showing how to update DNS records after a machine starts up.
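For reference, the core of such a script boils down to a handful of gcloud dns commands run at boot. A rough sketch, where the zone name, record name and TTL are placeholders you would replace with your own:

    #!/bin/bash
    # Update a Cloud DNS A record with this VM's current external IP.
    ZONE="my-zone"                # Cloud DNS managed zone (placeholder)
    RECORD="vm.example.com."      # fully qualified record name (placeholder)
    TTL=300

    # Current external IP, from the GCE metadata server
    NEW_IP=$(curl -s -H "Metadata-Flavor: Google" \
      "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip")

    # IP currently published in DNS
    OLD_IP=$(gcloud dns record-sets list --zone="$ZONE" --name="$RECORD" --type=A \
      --format="value(rrdatas[0])")

    if [ "$NEW_IP" != "$OLD_IP" ]; then
      gcloud dns record-sets transaction start --zone="$ZONE"
      [ -n "$OLD_IP" ] && gcloud dns record-sets transaction remove --zone="$ZONE" \
        --name="$RECORD" --type=A --ttl="$TTL" "$OLD_IP"
      gcloud dns record-sets transaction add --zone="$ZONE" \
        --name="$RECORD" --type=A --ttl="$TTL" "$NEW_IP"
      gcloud dns record-sets transaction execute --zone="$ZONE"
    fi

The VM's service account also needs permission to modify Cloud DNS records for this to work.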
I'm a newcomer to using overseas servers. I recently bought a VPS from virmach in order to visit foreign websites like Google and Wikipedia.
I've been trying for a long time to configure Shadowsocks on my server.
However, when I used shadowsocks-qt5 to connect to my server, the connection timed out.
And of course I can't access Google.
What I want to ask is why this fails.
Here are the things that I remember doing:
stopped the firewall on both computers;
built the .json file, following blogs from China.
Here is the outline of my shadowsocks.json on my server:
{
"server":"0.0.0.0",
"server_port":8388,
"local_address":"127.0.0.1",
"local_port":1080,
"password":"XXXX",
"timeout":600,
"method":"aes-256-cfb"
}
Other (maybe useful) information:
my client OS version: Ubuntu 18.04.3 LTS
my server OS version: Ubuntu 16.04.6 LTS
the client I choose is from: https://github.com/shadowsocks/shadowsocks-qt5
I can't help but wonder: are there any other possible reasons that I've forgotten? Can anyone give me some helpful details to solve this puzzling problem? Thanks a lot!
I have not set up my own VPS; I have instead subscribed to the server provided by caonima.io, so I can't speak to any server-related issues. Additionally, I have no affiliation with caonima.io. I did, however, successfully set up my client on Ubuntu 16.04 after having some issues connecting to GFW-blocked (China's Great Firewall) websites.
From what I understand from my solution, the client configuration is NOT the only step of the setup. There are two layers of proxy access that need to be completed:
Client configuration: configure your client with the server and connection information. A successful connection looked like this for me in my command-line interface:
[screenshot: shadowsocks-libev command-line client successful connection]
System or browser proxy configuration: you will need to configure either your browser or web-access tool to use a proxy, or set system-wide proxy settings. To set system-wide proxy settings, go to System Settings > Network > Network Proxy and enter the proxy information. Setting the SOCKS host to localhost:1080 resulted in successful access to GFW-blocked websites (as shown below)!
[screenshot: Ubuntu network settings, manual proxy configuration]
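One side note for anyone running their own server, as in the question above: in the client configuration, "server" must be the VPS's actual public IP; "server":"0.0.0.0" only makes sense on the server side, where it means "listen on all interfaces". A minimal client-side sketch for the shadowsocks-libev command-line client, with a placeholder IP and password:

    {
      "server": "203.0.113.25",
      "server_port": 8388,
      "local_address": "127.0.0.1",
      "local_port": 1080,
      "password": "XXXX",
      "timeout": 600,
      "method": "aes-256-cfb"
    }

Saved as, say, client.json and started with ss-local -c client.json, this exposes a SOCKS5 proxy on 127.0.0.1:1080, which matches the system proxy settings above.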
To a non-developer, this installation guide is very hard to get through. I'm on Mac OS X 10.6 and have Apache and ejabberd installed (which it says is the bare minimum).
I have a few questions to get it up and running.
Where do I clone the git repository to on my computer? My desktop, /Users/fred/sites folder, or somewhere else?
Are my HTTP DOMAIN and XMPP DOMAIN macpro.local (my local address), or localhost, or something else?
For development purposes, I've been successfully using the ejabberd TurnKey Linux Appliance that combines ejabberd with Speeqe.
I'm not sure if you're still interested, but the domains are going to have to be the network-visible ones.
Your XMPP domain is how you would log in to the ejabberd server, e.g.:
user@yourxmppservername.com. If it is your local machine, this might just be your IP address, but not macpro.local, because then only your computer would be able to see it.
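For what it's worth, that same domain is what needs to appear in ejabberd's list of served hosts. A tiny sketch for the old-style ejabberd.cfg used around that era (the domain is only a placeholder):

    %% /etc/ejabberd/ejabberd.cfg (excerpt)
    %% Serve the network-visible domain, not macpro.local
    {hosts, ["yourxmppservername.com"]}.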
I want to be able to run an EC2 instance (CentOS LAMP based) as a mail server and create email addresses for users when they sign up so that they can upload files via email. The emails would be parsed and attached files processed and added to S3 for storage.
Is this feasible?
What mail package would I need for this?
I would like to be able to create email addresses such as username@uploads.domainname.com.
My domain name points to a web server not on Amazon Web Services, so I realise this may not be possible.
Where do I start with this? Are there any good resources for setting up a mail server on EC2?
Many thanks
To answer the question: yes, it is possible. As paul says, if you require 24x7 availability over the long term, then EC2 may be more expensive than some other providers. But it can make sense if you're a startup or if you're doing this to learn more about these topics.
Basic steps would be:
Create a Linux EC2 instance: http://docs.amazonwebservices.com/AWSEC2/2008-02-01/GettingStartedGuide/?ref=get-started
Install a mail package: http://flurdy.com/docs/postfix/
Change your DNS MX record: http://en.wikipedia.org/wiki/MX_record
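For the MX record step, the zone change amounts to pointing mail for the subdomain at the instance's Elastic IP. A hypothetical zone-file sketch using the names from the question and a documentation IP:

    ; hand mail for uploads.domainname.com to the EC2 instance
    uploads.domainname.com.       IN MX 10  mail.uploads.domainname.com.
    mail.uploads.domainname.com.  IN A      203.0.113.20    ; Elastic IP of the instance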
Amazon has had trouble in the past with blacklisting, but they're trying to address that. Read here: http://developer.amazonwebservices.com/connect/thread.jspa?threadID=37650
Edit: You could also use a pre-configured CentOS image (combining steps 1 and 2); this one has Postfix already installed: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=821
Using EC2 as a mail server does not seem like a good fit to me. You're not using either the "Elastic" or the "Cloud" part of the "Elastic Compute Cloud". You need something that has to be up 24x7, has the same IP all the time, and doesn't need to expand or contract on demand, so a VPS would be a better solution.
It can probably be done with an Elastic IP, along with the correct configuration of the mail server on the EC2 instance to receive mail.
However, it might be easier to use Google App Engine. You can forward the messages for username@uploads.domainname.com from your existing mail server to your appspot email address, then process the messages and store the files on S3 with some App Engine code in Python. See the App Engine documentation on receiving email for more information: http://code.google.com/appengine/docs/python/mail/overview.html
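As a rough illustration of that last idea, the legacy Python App Engine inbound-mail API hands your handler an already parsed message whose attachments can then be pushed to S3. A minimal sketch, assuming inbound mail is enabled via inbound_services in app.yaml; the S3 upload itself is a hypothetical placeholder, not part of the App Engine API:

    # Handler for mail sent to anything@your-app-id.appspotmail.com (legacy Python runtime).
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.mail_handlers import InboundMailHandler


    class UploadMailHandler(InboundMailHandler):
        def receive(self, mail_message):
            # attachments is a list of (filename, EncodedPayload) pairs, if any
            for filename, payload in getattr(mail_message, 'attachments', []):
                data = payload.decode()  # raw bytes of the attachment
                upload_to_s3(mail_message.sender, filename, data)  # hypothetical helper


    def upload_to_s3(sender, filename, data):
        # Placeholder: push the file to S3, e.g. with boto or a signed HTTP PUT.
        pass


    # InboundMailHandler.mapping() returns the /_ah/mail/.* route for this handler.
    app = webapp.WSGIApplication([UploadMailHandler.mapping()])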