Simple question on DotNetNuke Module development

Suppose you are developing multiple modules on a page that display different things, such as a bio or pictures, based on a userId passed through the query string.
At page load, should all modules on the page act independently, each looking at the query string and returning content based on the userId?
And in the same way, should each module individually check that the correct user is logged in before it allows the content to be modified?
I have made one or two modules before for a website, but this is the first time I am developing a DotNetNuke website and I am just unsure whether this is the only way.

Your user control should already inherit from DotNetNuke.Entities.Modules.PortalModuleBase. If so, you can use the this.UserInfo.UserID property to retrieve the user's ID. This is much safer than looking at the query string. Remember that the user may not be logged in, and in that case the above would cause a null reference - so be sure to test for null first.
Also, on a somewhat related note, you can use this.UserInfo.IsInRole("RoleName") to test whether a particular user is in a given role.
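A minimal C# sketch of both checks inside a module control; the "Editors" role name is just a placeholder for whatever role your portal actually uses:
// Code-behind of a control inheriting from DotNetNuke.Entities.Modules.PortalModuleBase
protected void Page_Load(object sender, System.EventArgs e)
{
    // UserInfo can be null (or UserID can be -1) for anonymous visitors
    if (UserInfo == null || UserInfo.UserID < 0)
    {
        return; // not logged in: render read-only content only
    }
    int userId = UserInfo.UserID;
    // Only expose edit functionality to users in an appropriate role
    bool canEdit = UserInfo.IsInRole("Editors");
}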

Each module should work independently of the others.
Also, I don't think you should look at the query string to get your user ID, because that can be spoofed. Instead, look at the base class for your module to see if there is a property containing the user info.


Security warning from extension_builder: action is publicly accessible

I created an extension with the extension builder.
On saving I get this message:
The object was updated. Please be aware that this action is publicly accessible unless you implement an access check. See https://docs.typo3.org/typo3cms/extensions/extension_builder/User/Index.html
How can I fix this issue? Yes, I read the page, but there are no useful hints.
Since the question is how to "fix the issue": there is no issue as such, it is a warning, and you remove it by making your request secure (as in the other answer).
The "hint" on the page is actually very straightforward. The "issue" is that a user is able to manipulate the URL and make the server execute an unwanted action.
Here is an example:
You have a list of the users of your site, and you can open their public profile for more information:
https://yourdomain.com/list/?tx_ext_plugin['action']=show&tx_ext_plugin['userId']=41.
So if I want to make some trouble, I change the action "show" to "delete", and I may be able to delete the poor user "41" from the database. That is bad.
https://yourdomain.com/list/?tx_ext_plugin['action']=delete&tx_ext_plugin['userId']=41.
Since it is your business logic, TYPO3 offers no out-of-the-box solution for this. That is why this warning from Extension Builder says that you need to take measures to prevent misuse.
Regarding how to implement better security, here are some thoughts about access control and some ideas for what to implement in your actions:
1) FE
You can separate your actions into different plugins. So if you have a public list action, its request cannot be turned against the plugin that is responsible for the delete action. How is this possible? TYPO3 looks up the page record in your database and renders it, and if there is a plugin on the page with the signature "tx_ext_plugin", that plugin receives the sent parameters. This gives you the possibility to add the different plugins to different pages, so changing the signature won't help an attacker, because:
If the delete action is not registered by the plugin, TYPO3 will throw an exception.
If you try to change the whole signature, the page won't be able to identify the plugin.
You can add the edit / delete plugin to pages where a user has to be logged in. You can even manage multiple usergroups: for example, a normal user can only edit their own profile, while a premium user can make further changes. In Fluid you can use the ifHasRole ViewHelper to show parts of your template only to defined user groups (there is an ifAuthenticated ViewHelper too), for example:
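A minimal Fluid sketch of that idea; "premiumUsers" is a placeholder for your own frontend usergroup, and the link target is just illustrative:
<f:security.ifAuthenticated>
    <f:security.ifHasRole role="premiumUsers">
        <!-- only logged-in members of the placeholder "premiumUsers" group see this -->
        <f:link.action action="edit" arguments="{user: user}">Edit profile</f:link.action>
    </f:security.ifHasRole>
</f:security.ifAuthenticated>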
You can take the extension "femanager" as an example. It has a controller "EditController" that covers actions like "update" and "delete". For example, before the update action runs, there is a check that the logged-in user has the same user ID as the record that is about to be changed. In a more complex scenario you can check the user group as well. A check of this kind might look like the sketch below.
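A minimal Extbase sketch of such an ownership check (the model, repository, and getter names here are placeholders, not femanager's actual code):
public function updateAction(\Vendor\MyExt\Domain\Model\Profile $profile)
{
    // UID of the currently logged-in frontend user (0 if nobody is logged in)
    $loggedInUid = (int)$GLOBALS['TSFE']->fe_user->user['uid'];
    // Only the owner of the record may change it
    if ($loggedInUid === 0 || $loggedInUid !== (int)$profile->getFeUser()->getUid()) {
        $this->addFlashMessage('You are not allowed to edit this profile.');
        $this->redirect('list');
    }
    $this->profileRepository->update($profile);
    $this->redirect('list');
}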
2) BE
It is actually almost the same as the frontend.
BUT instead of plugins / user groups assigned in the page settings, you can use different mount points, so BE users cannot see folders where they are not allowed to edit / delete.
You have those two ViewHelpers for the BE too; their names are f:be.security.ifAuthenticated and f:be.security.ifHasRole. However, ifAuthenticated is really meant for the FE; in a BE context it does not make much sense.
You also have the possibility to read the ID and user groups of the BE user and make your own checks before you let an action run.
You can also turn a module on / off for a certain BE group.
+1: This has nothing to do with any action, but just to list it too: it is also possible to allow / disallow individual fields for BE users when they edit a record through the List mode in the BE.
Extension Builder creates dummy actions for updating and creating records. Those example actions do not contain any security checks as to whether the caller is actually allowed to perform them.
So it is your job to add adequate access control to those methods. E.g. make sure the current user (be it Frontend or Backend) is actually allowed to update the model in question.

Facebook note shared info

My client needs to pick at random from everyone who shared a specific note on his Facebook page; it's like a raffle. However, I haven't found a way to get any info, not even the names, in a form I can randomize over. Is there any way I can fetch this information?
You can query the note for the comments or likes if you are looking at a specific note. Or you can query the user for all their notes and loop over that.
Depending on what language you are using, randomly choosing from an array of items should be trivial.
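A rough Python sketch of the raffle part; NOTE_ID and ACCESS_TOKEN are placeholders, and which connection (likes or comments) stands in for "shared" depends on what the API actually exposes for your note:
import json
import random
import urllib2
url = "https://graph.facebook.com/NOTE_ID/likes?access_token=ACCESS_TOKEN"
likes = json.loads(urllib2.urlopen(url).read())["data"]
winner = random.choice(likes)  # uniform random pick over the returned users
print(winner["name"])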

MVC2 Routing and Security: How to securely pass parameters?

I'm a relative MVC noob coming from WebForms. I think I have a pretty good grasp of MVC with a couple exceptions, and I think I may have broken the pattern. I'm gonna try to keep this short, so I'm assuming that most of what I am asking is relatively obvious.
Let's say I have a news site with articles. In that case, a URL in the form of mynewssite.com/Articles/123 works just great because I don't care who views which article. The user can change the ArticleID in the URL to whatever they want and pull up that article. In my case, however, I only want the user to be able to view/edit data entities (articles, or whatever) that belong to them. To achieve this, I am using their UserID (GUID) as a foreign key in the database, and displaying a list of their data for them to choose from. Here comes the problem... when they click on the link that is created by Url.Action("Edit", New With {.id = item.id}) (I'm not using ActionLink because I need to add HTML content inside the link), the id shows up as a querystring parameter. I could add a route for it, but the id would still show up in the URL for them to tamper with. The obvious implication is that by tampering with the URL, they could view/edit any entity that they want.
What am I missing?
Is there a good way to pass the parameters without putting them in the URL? I know that I could put them in a form on the page and submit the form, but that seems cumbersome for my example, and I'm using jQuery.ajax in places, which seems to conflict with this idea.
I could also check their UserID against the data in the Edit method, but that seems cumbersome too.
Is this question too broad? Please let me know what specifics you need. Thanks.
Even in WebForms, you would have to add special logic on each request to filter only the articles that the user owns. I don't see why MVC should be any different. Sure, you can use web.config to deny access to given URLs, but not when you use a single page that takes a parameter for which data to show.
Your best bet is probably to filter this at the database level. By adding a where clause that includes the user ID, the app will return a "no records found" sort of result, and you can handle that however you want.
You could use forms authentication. That way, when the user authenticates, an encrypted cookie is issued containing his username, which cannot be tampered with. Then you can verify whether the currently connected user is authorized to edit the article.
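A rough sketch combining the two ideas (ASP.NET MVC 2 with ASP.NET Membership; db, Articles, and OwnerId are placeholder names for your own data access code, not framework APIs):
[Authorize]
public ActionResult Edit(int id)
{
    // The identity comes from the encrypted forms-auth cookie, not the URL,
    // so it cannot be tampered with from the address bar
    Guid userId = (Guid)Membership.GetUser().ProviderUserKey;
    // Filter by owner at the database level: someone else's article
    // simply behaves as if it does not exist
    var article = db.Articles.SingleOrDefault(a => a.Id == id && a.OwnerId == userId);
    if (article == null)
    {
        throw new HttpException(404, "Article not found");
    }
    return View(article);
}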

Facebook Graph API: Getting the total number of posts

I've been using the Facebook Graph API to display user posts. When I get the initial "page" of posts, the resulting data object has a paging property object with a previous and next URL property. I was hoping to generate navigation links based on this available paging information. However, sometimes these URLs point to an empty set of data, so I obviously don't want to navigate the user to an empty page.
Is there a way to find the total count of objects in a collection so that better navigation can be derived? Is there any way to get smarter paging data?
Update:
Sorry if my post isn't clear. To illustrate, look at the data at https://graph.facebook.com/7901103/posts and its paging property URLs. Then follow those URLs to see the issue: empty pages of data.
Since it pages the data by date and time, you can't know whether there actually is more data until you send the request. But you can preload the data from the previous/next URL to determine whether it is worth displaying a paging link on your web page. For example:
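A rough Python sketch of that approach; paging is assumed to be the paging object from the first response, and how you render the link is up to you:
import json
import urllib2
def page_has_items(page_url):
    # Fetch a paging URL and report whether it actually contains any posts
    body = json.loads(urllib2.urlopen(page_url).read())
    return len(body.get("data", [])) > 0
# Only offer a "next" link when the next page is non-empty
next_url = paging.get("next")
show_next_link = bool(next_url) and page_has_items(next_url)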
Why be dependent on Facebook?
Why not preload all the data for a user and save it into a database? Then you fetch the posts from the DB and show them to the user. This way you have full control over how many posts there are and how to manage next and prev.
I was going to try to post this as a comment to your question, but I can't seem to do so...
I know that the Graph API returns JSON, and while I've never come across a way to have the total number of posts returned, depending on what technology you are using to process the response, you might be able to capture the size of the JSON array containing the posts.
For example, if I were using a Java application I could use the libraries available at json.org (or Google GSON, or XStream with the JSON driver) to populate an object and then simply use the JSONArray.length() method to check the number of posts returned.
see:
http://www.json.org/javadoc/org/json/JSONArray.html
It might seem like a bit of a simplistic solution, but it might be the type of workaround you require if you can't find a way to have Facebook return that data.
Can you specify what technology your application is based in?

Best way to store data for Greasemonkey based crawler?

I want to crawl a site with Greasemonkey and wonder if there is a better way to temporarily store values than with GM_setValue.
What I want to do is crawl my contacts in a social network and extract the Twitter URLs from their profile pages.
My current plan is to open each profile in its own tab, so that it looks more like a normal browsing person (i.e. CSS, scripts, and images will be loaded by the browser), then store the Twitter URL with GM_setValue. Once all profile pages have been crawled, create a page using the stored values.
I am not so happy with the storage option, though. Maybe there is a better way?
I have considered inserting the user profiles into the current page so that I could process them all with the same script instance, but I am not sure whether XMLHttpRequest looks indistinguishable from normal user-initiated requests.
I've had a similar project where I needed to get a whole lot of data (invoice lines) from a website and export it into an accounting database.
You could create a .aspx (or PHP etc) back end, which processes POST data and stores it in a database.
Any data you want from a single page can be stored in a form (hidden using style properties if you want), using field names or IDs to identify the data. Then all you need to do is make the form action an .aspx page and submit the form using JavaScript.
(Alternatively you could add a submit button to the page, allowing you to check the form values before submitting to the database).
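A rough JavaScript sketch of that idea; the endpoint URL and field name are placeholders for your own back end, and twitterUrl is assumed to hold the value scraped from the profile page:
// Build a hidden form holding the scraped value and POST it to the back end
var form = document.createElement("form");
form.method = "POST";
form.action = "https://example.com/collect.aspx"; // placeholder endpoint
form.style.display = "none";
var field = document.createElement("input");
field.type = "hidden";
field.name = "twitterUrl";
field.value = twitterUrl; // assumed to be extracted from the profile page
form.appendChild(field);
document.body.appendChild(form);
form.submit();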
I think you should first ask yourself why you want to use Greasemonkey for your particular problem. Greasemonkey was developed as a way to modify one's browsing experience -- not as a web spider. While you might be able to get Greasemonkey to do this using GM_setValue, I think you will find your solution to be kludgy and hard to develop. That, and it will require many manual steps (like opening all of those tabs, clearing the Greasemonkey variables between runs of your script, etc).
Does anything you are doing require the JavaScript on the page to be executed? If so, you may want to consider using Perl and WWW::Mechanize::Plugin::JavaScript. Otherwise, I would recommend that you do all of this in a simple Python script. You will want to take a look at the urllib2 module. For example, take a look at the following code (note that it uses cookielib to support cookies, which you will most likely need if your script requires you to be logged into a site):
import urllib2
import cookielib
# Build an opener whose cookie jar persists across requests,
# so the script can stay logged in to the target site
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookielib.CookieJar()))
response = opener.open("http://twitter.com/someguy")
responseText = response.read()
Then you can do all of the processing you want using regular expressions.
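For example, to pull Twitter profile links out of the fetched HTML (the pattern here is only a rough sketch):
import re
# Find anything that looks like a Twitter profile URL in the page source
twitter_urls = re.findall(r'https?://twitter\.com/\w+', responseText)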
Have you considered Google Gears? That would give you access to a local SQLite database which you can store large amounts of information in.
The reason for wanting Greasemonkey is that the page to be crawled does not really approve of robots. Greasemonkey seemed like the easiest way to make the crawler look legitimate.
Actually, running your crawler through the browser does not make it any more legitimate. You are still breaking the terms of use of the site! WWW::Mechanize, for example, is equally well suited to 'spoof' your User-Agent string, but that and crawling are, if the site does not allow spiders/crawlers, illegal!
The reason for wanting Greasemonkey is that the page to be crawled does not really approve of robots. Greasemonkey seemed like the easiest way to make the crawler look legitimate.
I think this is the hardest way imaginable to make a crawler look legitimate. Spoofing a web browser is trivially easy with some basic understanding of HTTP headers.
Also, some sites have heuristics that look for clients that behave like spiders, so simply making requests look like they come from a browser doesn't mean they won't know what you are doing.