I have a medium-sized site (about 50 pages), roughly half of which is static. I would like to give my client access to edit the static pages and was considering either PageLime or SimpleCMS. But security is a concern for my client. Can anyone comment on the security of either of these CMSes? I'm a little concerned as well, because the site's FTP credentials are stored on the CMS's site.
Thanks for any input.
I'm working on creating a form (a kind of survey) to collect user input; any user who visits the website can submit information. This means the API is publicly accessible to anyone, without any token or session (basically nothing).
I want to prevent people from finding my endpoint and creating thousands or millions of requests (spam) to flood my service and database. I've looked around Stack Overflow and some posts on Medium, and interestingly I haven't found much about this.
Some said:
Bundle my website as a hybrid app and supply an access token only to "trusted devices" for calling my API (but this is a pure web app)
Create a custom header and check for that header on my web server (hmm..?)
Use a CAPTCHA (this will only stop people spamming via the GUI; spamming via a script is still possible)
Is there simply no better way to secure it, since it's public?
Any thoughts to share?
Have you read about request queues?
They might help you solve the issue:
source 1
source 2
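To make the throttling idea concrete: below is a minimal sketch of a per-IP rate limiter in Node.js (a simpler relative of the request-queue approach mentioned above). It keeps counts in memory, so it assumes a single server process; the window and limit values are arbitrary placeholders, not recommendations.
// Minimal per-IP rate limiter sketch. In-memory only, so it assumes a
// single Node.js process; behind a load balancer you would need a shared
// store such as Redis. The numbers below are illustrative, not tuned.
const http = require('http');

const WINDOW_MS = 60 * 1000; // 1-minute window
const MAX_REQUESTS = 20;     // max submissions per IP per window
const hits = new Map();      // ip -> { count, windowStart }

function allowed(ip) {
  const now = Date.now();
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS;
}

http.createServer(function (req, res) {
  const ip = req.socket.remoteAddress;
  if (!allowed(ip)) {
    res.writeHead(429); // Too Many Requests
    res.end('Rate limit exceeded');
    return;
  }
  // ... validate and store the survey submission here ...
  res.writeHead(200);
  res.end('OK');
}).listen(3000);
This doesn't stop a distributed spammer, but it raises the cost considerably and pairs well with a CAPTCHA and basic server-side validation.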
We have a corporate website that receives external emails, processes them, and shows them in the browser to the user. We will be showing the emails in HTML format if they are available in that format. However, this basically means that we will be showing user-generated HTML code (you can send any HTML in an email, as far as I know).
What are the security risks here? What steps should we take to minimize these risks?
I can currently think of:
Removing all JavaScript
Perhaps removing external CSS? Not sure if this is a security risk
Not loading images (to limit tracking... not sure if this poses a security risk or just a privacy risk)
Would that be all? Removing HTML tags is always error-prone, so I am wondering if there is a better way to somehow disable external scripts when displaying e-mail.
The security risks are, as far as I know, the same as with Cross-Site-Scripting (XSS).
OWASP describes the risks as follows:
XSS can cause a variety of problems for the end user that range in severity from an annoyance to complete account compromise. The most severe XSS attacks involve disclosure of the user’s session cookie, allowing an attacker to hijack the user’s session and take over the account. Other damaging attacks include the disclosure of end user files, installation of Trojan horse programs, redirect the user to some other page or site, or modify presentation of content. An XSS vulnerability allowing an attacker to modify a press release or news item could affect a company’s stock price or lessen consumer confidence.
Source
Defending against it requires layers of defense, such as, but not limited to:
Sanitizing the HTML with something like DOMPurify.
Making use of HttpOnly cookies for security-sensitive cookies so they can't be read from JavaScript. Source
Adding a Content Security Policy so the browser only trusts scripts from domains you tell it to trust. Source
Depending on your requirements, it might also be possible to load the email content into a sandboxed iframe as an additional security measure. This can be done like this:
var dirtyHTML = '<div>...</div>'; // the raw HTML from the email
var sanitizedHTML = DOMPurify.sanitize(dirtyHTML); // DOMPurify is used via DOMPurify.sanitize()

var iframe = document.getElementById('iframeId'); // assumes <iframe id="iframeId" sandbox> in the markup
iframe.srcdoc = sanitizedHTML;
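For the CSP layer, here is a minimal sketch of sending a restrictive Content-Security-Policy header from a plain Node.js server; the policy values below are assumptions and would need to be tailored to your application.
const http = require('http');

// Minimal sketch: serve a page with a restrictive Content-Security-Policy header.
// The policy values are illustrative assumptions; tailor them to your application.
http.createServer(function (req, res) {
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; script-src 'self'; object-src 'none'"
  );
  res.setHeader('Content-Type', 'text/html');
  res.end('<p>sanitized email content goes here</p>');
}).listen(8080);
With such a policy in place (and without 'unsafe-inline'), inline or third-party scripts that slip past sanitization are refused by the browser.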
I'm using the "github page" to create my personal page, but I'm going to need a hosting service because it will require some queries in the database. How can I use my GitHub Page url as a domain?
GitHub Pages is not really designed for this kind of function. It's there to serve static pages, where all content on the page is 'hardcoded' (meaning no server-side generated data). What you're asking for falls along the lines of a web application.
But if you're looking to be a maverick, there might be some options out there for you.
I personally haven't done something like this, but I found a couple of database services you might want to check out:
Firebase by Google
RdbHost
The above recommendations may be useful if you're expecting data entry from visitors to your page. But if your data is static as well... you might be better off using a JSON file or some alternative where the data can live right in your repo (see the sketch below).
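A minimal sketch of loading such a file from the page itself, assuming a file named data.json sits next to the page in the repo:
// Sketch: load a JSON file that lives in the same repo as the page.
// The file name "data.json" is an assumption.
fetch('data.json')
  .then(function (response) { return response.json(); })
  .then(function (data) {
    // Render the data however the page needs; here we just log it.
    console.log(data);
  })
  .catch(function (err) { console.error('Could not load data.json', err); });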
I am trying to search and find content from a site using Perl's WWW::Mechanize. It worked fine in the beginning, but after a few executions I am getting 403 Forbidden instead of the search results:
use WWW::Mechanize;

my $m = WWW::Mechanize->new();
my $url = "http://site.com/search?q=$keyword";  # $keyword is assumed to be set elsewhere
$m->get($url);
my $c = $m->content;
print $c;
How can I solve this problem? Please give me some suggestions.
Before beginning to scrape a site, you should make sure that you are authorized to do so. Most sites have Terms of Service (TOS) that lay out how you can use the site. Many sites disallow automated access and place strong restrictions on their intellectual property.
A site can defend against unwanted access on three levels:
Conventions: The /robots.txt that almost every site has should be honored by your programs. Do not assume that a library you are using will take care of that; honoring the robots.txt is your responsibility. Here is an excerpt from the Stackoverflow robots.txt:
User-Agent: *
Disallow: /ask/
Disallow: /questions/ask/
Disallow: /search/
So it seems SO doesn't like bots asking questions, or using the site search. Who would have guessed?
It is also expected that a developer will use the API and similar services to access the content. E.g. Stackoverflow has very customizable RSS feeds, has published snapshots of the database, even has an online interface for DB queries, and an API you can use.
Legal: (IANAL!) Before accessing a site for anything other than your personal, immediate consumption, you should read the TOS, or whatever they are called. They state if and how you may access the site and reuse content. Be aware that all content has some copyright. The copyright system is effectively global, so you aren't exempt from the TOS just by being in another country than the site owner.
You implicitly accept the TOS by using a site (by any means).
Some sites license their content to everybody. Good examples are Wikipedia and Stackoverflow, which license user submissions under CC-BY-SA (or rather, the submitting users license their content to the site under this license). They cannot restrict the reuse of the content, but they can restrict access to it. E.g. the Wikipedia TOS contains this under the section "Refraining from certain activities":
Engaging in Disruptive and Illegal Misuse of Facilities
[…]
Engaging in automated uses of the site that are abusive or disruptive of the services […]
[…] placing an undue burden on a Project website or the networks or servers connected with a Project website;
[…] traffic that suggests no serious intent to use the Project website for its stated purpose;
Knowingly accessing, […] or using any of our non-public areas in our computer systems without authorization […]
Of course, this is mainly meant to disallow a DDoS, and while bots are an important part of Wikipedia, other sites do tend to frown on them.
Technical measures: … like letting connections from an offending IP time out, or sending a 403 error (which is very polite). Some of these measures may be automated (e.g. triggered by user-agent strings, weird referrers, URL hacking, or rapid requests) or applied by watchful sysadmins tailing the logs.
If the TOS etc. don't make it clear that you may use a bot on the site, you can always ask the site owner for written permission to do so.
If you think there was a misunderstanding and you are being blocked despite regular use of a site, you can always contact the owner/admin/webmaster and ask them to re-open your access.
On my website, there is a web form that users fill out and the data collected gets e-mailed to me. Is it possible for someone to hack the data and get the users' information? Also, my site does not use a secure connection.
It depends on whether the data is logged, or flushed after being emailed.
If it is logged, then theoretically yes, a malicious user could compromise the server and access the logs.
If it isn't, there's still the possibility of your email being compromised, but at some point a line has to be drawn.
It would probably be helpful to see a specific example, or at least a few more details about exactly how this form operates.
If someone uses your site from, say, an internet cafe, then there could be a man-in-the-middle attack where all requests go through some program sitting on the cafe's server.
I think if you are worried then you should probably secure at least that page.
If you are not using SSL then it's possible for someone to sniff the traffic to your server and collect all the user information that's being posted from their browser. Using an SSL certificate and forcing HTTPS will make it much harder (nearly impossible) to capture the traffic on the network.
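For what it's worth, forcing HTTPS on the application side can be as simple as redirecting every plain-HTTP request. A minimal Node.js sketch, assuming a certificate is already installed and the HTTPS version of the site is already up and serving:
// Sketch: redirect every plain-HTTP request to its HTTPS equivalent.
// Assumes the HTTPS site is already live; port 80 may require elevated privileges.
const http = require('http');

http.createServer(function (req, res) {
  res.writeHead(301, { Location: 'https://' + req.headers.host + req.url });
  res.end();
}).listen(80);
Many hosts and web servers can do the same redirect via configuration instead of application code.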