What is the cs_PostViews table used for in Community Server?

I'm running Community Server 2.1 at the moment, and my cs_PostViews table is at 200 MB. What exactly is this table used for? Is it safe to truncate?

Found out myself.
This table holds every unique URL that was requested and successfully served (i.e. no 404-type URLs), along with the number of times each URL has been requested.
I ended up truncating the table with no ill effects so far.
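For anyone in the same spot, a minimal T-SQL sketch (assuming the default SQL Server back end that Community Server uses; TRUNCATE removes every row, so take a backup first if you're unsure):

-- Report row count and space used by the table before deciding
EXEC sp_spaceused 'cs_PostViews';
-- Remove all rows (fast, but not selective)
TRUNCATE TABLE cs_PostViews;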


Unable to practice Sql Injection (blind) on DVWA 1.10

I am practicing security testing. I came across DVWA and started practicing SQL injection. I was doing fine until I got to SQL Injection (blind); no matter which query I try, I am not getting the desired result.
For example:
1' and 1=0 union select null,table_name from information_schema.tables#
simply returns "User ID exists in the database."
I have set the DVWA security level to Low, and I have also made sure there are no errors on the application's setup page under the Setup Check section.
Following are environment details:
Operating system: Windows
Backend database: MySQL
PHP version: 5.6.16
I think the answer is here, and the behavior is expected:
https://github.com/ethicalhack3r/DVWA/issues/12
Someone complained of the opposite behavior, the developer agreed, and a contributor named g0tm1lk fixed it. He made the exercise genuinely "blind", so we have to use blind injection methods to test the vulnerability.
Showing SQL error messages to the user is really two problems at once: a SQL injection vulnerability plus a misconfiguration issue.
A blind SQL injection can occur when the columns of the query results are never shown to the user, but the user can still tell somehow whether the query returned any records.
E.g.: suppose the URL "http://www.example.com/user?id=USER_ID" returns:
200 if USER_ID exists
404 if USER_ID does not exist
but shows no information from the query results (e.g. username, address, phone, etc.).
If the page is vulnerable to blind SQLi, an attacker won't be able to get info from the DB printed on the result page, but he might be able to infer it by asking yes/no questions.
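In DVWA's case the yes/no signal is the exists/missing message itself. A rough sketch of boolean-based probes for the User ID field (MySQL syntax; the version check at the end is just an illustrative condition, not something from the question):
1' AND 1=1 #  (always true: the page says the user ID exists)
1' AND 1=2 #  (always false: the page says the user ID is missing)
1' AND SUBSTRING(version(),1,1)='5' #  (true only if the MySQL major version is 5)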

Strongloop API response limit over Oracle Database

I've just started using StrongLoop to define a REST API over my Oracle database.
Everything works fine when I check my API using "localhost:3000/explorer".
For instance, when I send a GET to list all persons, the server answers with the list of people in the PERSONS table.
The issue is that the server does not return all the records in the table: it returns only 100 records, even though the table contains more than 100.
Am I missing something?
I found the solution, in case someone faces the same issue.
The problem is that in loopback-connector-oracle, the maximum number of rows is set to 100.
To change the maximum rows you should:
1- In the "datasources.json" file, set the property "maxRows" to the number you want, for instance "maxRows": 1000 (see the sketch after this list)
2- Replace the file \node_modules\loopback-connector-oracle\lib\oracle.js with the file oracle.js
3- Restart your API; it will now return more than 100 records
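For reference, a minimal datasources.json sketch with the maxRows property set (the datasource name, connection details and credentials below are placeholders, not values from the original answer):

{
  "oracleDs": {
    "name": "oracleDs",
    "connector": "oracle",
    "host": "localhost",
    "port": 1521,
    "database": "XE",
    "username": "scott",
    "password": "tiger",
    "maxRows": 1000
  }
}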
See this link for more details about the issue
I don't think there is any such limit; by default it will fetch all the records.
Please check your table/database settings.

Firebase REST API: Delete sometimes fails

I'm currently building a web front end for a Matlab program. I'm using webread/webwrite to interface with the Firebase Realtime Database (though I'll be shifting to urlread2 soon for compatibility reasons). The Matlab end has to delete nodes from the database on a regular basis. I do this by using webwrite to send a POST request with "X-HTTP-Method-Override: DELETE" in the header. This works, but after a few deletes it stops working until data is either added to or removed from the database. It seems completely random; my teammate and I have been trying to find a pattern for a few days and we've found nothing.
Here is the relevant Matlab code:
% Build the REST endpoint for the node to delete
modurl = strcat(url, modkey, '.json');
modurlstr = char(modurl);
% Tell Firebase to treat this POST as a DELETE
webop = weboptions('KeyName', 'X-HTTP-Method-Override', 'KeyValue', 'DELETE');
webwrite(modurlstr, webop);
Where url is our database url and modkey is the key of the node we're trying to delete. There's no authentication because the database is set to public (Security is not an issue for us).
The database is organized pretty simply. The root node just has a bunch of children. We only delete a whole child (i.e. we don't ever try to delete the individual components of a child).
Are we doing something wrong?
Thanks in advance!
We found out that some of the keys had hyphens in them, which were getting translated to their ASCII representation. The reason it seemed random was that the delete only failed on nodes whose keys contained a hyphen. When we switched those keys back, everything worked fine.

How to implement robust pagination with a RESTful API when the resultset can change?

I'm implementing a RESTful API which exposes Orders as a resource and supports pagination through the resultset:
GET /orders?start=1&end=30
where the orders to paginate are sorted by ordered_at timestamp, descending. This is basically approach #1 from the SO question Pagination in a REST web application.
If the user requests the second page of orders (GET /orders?start=31&end=60), the server simply re-queries the orders table, sorts by ordered_at DESC again and returns the records in positions 31 to 60.
The problem I have is: what happens if the resultset changes (e.g. a new order is added) while the user is viewing the records? In the case of a new order being added, the user would see the old order #30 in first position on the second page of results (because the same order is now #31). Worse, in the case of a deletion, the user sees the old order #32 in first position on the second page (#31) and wouldn't see the old order #31 (now #30) at all.
I can't see a solution to this without somehow making the RESTful server stateful (urg) or building some pagination intelligence into each client... What are some established techniques for dealing with this?
For completeness: my back-end is implemented in Scala/Spray/Squeryl/Postgres; I'm building two front-end clients, one in backbone.js and the other in Python Django.
The way I'd do it is to number the items from old to new, so the indices never change. Then, when querying without any start parameter, return the newest page. The response should also contain an index indicating which elements it contains, so you can calculate the indices to request for the next (older) page. While this is not exactly what you want, it seems like the easiest and cleanest solution to me.
Initial request: GET /orders?count=30 returns:
{
  "start": 1039,
  "count": 30,
  ... // data
}
From this the consumer calculates that it wants to request:
Next request: GET /orders?start=1009&count=30 which then returns:
{
  "start": 1009,
  "count": 30,
  ... // data
}
Instead of raw indices you could also return a link to the next page:
{
  "next": "/orders?start=1009&count=30"
}
This approach breaks if items get inserted or deleted in the middle. In that case you should use some auto-incrementing persistent value instead of a positional index.
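A rough SQL sketch of that variant, assuming the orders table has an auto-incrementing id column (the column name, page size and the :start parameter here are illustrative, not from the question):

-- Newest page: just take the highest ids
SELECT * FROM orders ORDER BY id DESC LIMIT 30;
-- Older pages: the client passes back the "start" value it computed,
-- which never shifts when newer orders are inserted
SELECT * FROM orders WHERE id <= :start ORDER BY id DESC LIMIT 30;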
The sad truth is that all the sites I see have pagination "broken" in that sense, so there must not be an easy way to achieve that.
A quick workaround could be reversing the ordering, so the positions of the items are absolute and unchanging as new items are added. From your front page you can hand out the latest indices to ensure consistent navigation from there.
Pros: same url gives the same results
Cons: there's no evident way to get the latest elements... Maybe you could use negative indices and redirect the result page to the absolute indices.
With a RESTful API, application state should live in the client. Here the application state should be some sort of timestamp or version number recording when the client started looking at the data. On the server side, you will need some form of audit trail, which is properly server data, as it does not depend on whether there have been clients or what they have done. At the very least, it should know when the data last changed. No contradiction with REST here.
You could add a version parameter to your GET. When the client first requests a page, it normally does not send a version; the server's reply contains one. For instance, if there are links in the reply to next/other pages, those links contain &version=... The client should send that version when requesting another page.
When the server receives a request with a version, it should at least know whether the data have changed since the client started looking and, depending on what sort of audit trail you have, how they have changed. If they have not, it answers normally, transmitting the same version number. If they have, it may at least tell the client, and depending on how much it knows about how the data have changed, it may tailor the reply accordingly.
Just as an example, suppose you get a request with start, end and version, and you know that since that version was current, 3 rows coming before start have been deleted. You might send a redirect to start-3, end-3 with the new version.
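As a sketch only (the audit table, its columns, and using a timestamp as the version are all assumptions on my part, not part of the scheme above), the shift could be computed roughly like this:

-- Rows that sorted before the client's window (newer ordered_at, since the list
-- is ordered_at DESC) and have been deleted since the client's version
SELECT COUNT(*) AS shift
FROM orders_audit
WHERE action = 'delete'
  AND changed_at > :client_version          -- changes after the client's snapshot
  AND ordered_at > :first_row_ordered_at;   -- rows that came before the client's window
-- Then redirect the client to start - shift, end - shift with a fresh version.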
WebSockets can do this. You can use something like pusher.com to catch realtime changes to your database and pass the changes to the client. You can then bind different pusher events to work with models and collections.
Just going to throw this out there. Please feel free to tell me if it's completely wrong and why.
This approach uses a left_off value to page through the results without using offsets.
Suppose you need your results ordered by the order_at timestamp, descending.
So when I ask for the first result set, it's:
SELECT * FROM Orders ORDER BY order_at DESC LIMIT 25;
right?
This is the case when you ask for the first page (in terms of the URL, probably the request that doesn't have a left_off parameter). Subsequent pages use the form:
yoursomething.com/orders?limit=25&left_off=$timestamp
Then, when you receive your data set, just grab the timestamp of the last viewed item, e.g. 2015-12-21 13:00:49.
Now, to request the next 25 items, go to: yoursomething.com/orders?limit=25&left_off=2015-12-21 13:00:49 (the last viewed timestamp).
In SQL you would just make the same query, but only for rows whose timestamp is less than $left_off (strictly less than, so the boundary row isn't repeated):
SELECT * FROM Orders
WHERE order_at < '2015-12-21 13:00:49'
ORDER BY order_at DESC
LIMIT 25;
You should get the next 25 items, starting after the last item you saw.
For those who see this answer: please comment on whether this approach is relevant or even possible in the first place. Thank you.

Can Primary-Keys be re-used once deleted?

0x80040237 Cannot insert duplicate key.
I'm trying to write an import routine for MSCRM4.0 through the CrmService.
This has been successful up until this point. Initially I was just letting CRM generate the primary keys of the records. But my client wanted the ability to set the key of a our custom entity to predefined values. Potentially this enables us to know what data was created by our installer, and what data was created post-install.
I tested to ensure that the Guids can be set when calling the CrmService.Update() method, and the results indicated that records were created with our desired values. I ran my import and everything seemed successful. While modifying my validation code for the import files, I deleted the data (through the CRM browser interface) and tried to re-import. Unfortunately, it now throws a duplicate key error.
Why is this error being thrown? Does the CRM interface delete the record, or does it still exist but remain hidden from the user? Is there a way to ensure that a deleted record is permanently deleted and the Guid becomes free again? In a live environment these Guids would never have existed, but during development I need these imports to be successful.
By the way, considering I'm having this issue, does this imply that statically setting Guids is not a recommended practice?
As far as I can tell, entities are soft-deleted, so it would not be possible to reuse that Guid unless you (or the deletion service) deleted the entity out of the database.
For example in the LeadBase table you will find a field called DeletionStateCode, a value of 0 implies the record has not been deleted.
A value of 2 marks the record for deletion. There's a deletion service that runs every 2(?) hours to physically delete those records from the table.
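For example, you could check whether soft-deleted rows are still sitting in the table (a rough sketch against the LeadBase example above; substitute the base table of whichever entity you imported, and keep it read-only, since the CRM database should only be queried for inspection):

-- Rows flagged for deletion but not yet physically removed by the deletion service
SELECT COUNT(*) AS PendingDeletes
FROM LeadBase
WHERE DeletionStateCode = 2;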
I think Zahir is right, try running the deletion service and try again. There's some info here: http://blogs.msdn.com/crm/archive/2006/10/24/purging-old-instances-of-workflow-in-microsoft-crm.aspx
Zahir is correct.
After you import and delete the records, you can kick off the deletion service at a time you choose with this tool. That will make it easier to test imports and reimports.