Depending on which blog post I visit, I need a UIWebView to be populated with the HTML from one div class. It's the same div class every time; the name doesn't change, just the content.
The div class is called:
<div class="field-item even" property="content:encoded">
The content looks like this:
<div class="field-items"><div class="field-item even" property="content:encoded"><p>Data centre security is a big issue – especially for co-location centres hosting multiple racks for multiple, often competing, clients. Yet whilst security to access the data centre can often be impressive, individual rack level security is often inadequate. Given the number of in-house staff and external engineers, from cablers to storage and server providers, traipsing through a data centre on a near daily basis, poor rack level security is a potential risk.</p>
<p>According to a recent survey conducted by Lieberman Software, 42 percent of IT staff can get unauthorised access to their organisation’s most sensitive information – including the CEO’s private documents. The failing is blamed on management’s naivety when it comes to understanding just how much privileged access their IT departments actually have.</p>
<p>The fact that most racks are secured only with standard handles using a manual key, bears out this survey. Easily broken or bypassed, these locks provide minimal corporate protection; they offer no access control or audit trail of activity. Given the huge ongoing investment in data centres - during 2011-2012, the UK invested an estimated $3.35 billion in data centres; the second highest spending of any country, according to the Datacentre Dynamics Global Industry Census 2011 - extending standard access control techniques to the data centre racks is an important step.</p>
<p>Companies can opt for a rack specific key, combination locks or key cards that are IP enabled to allow an organisation to impose strict control over the time/day an individual is allowed to access the rack. Using standard access control software, all activity is recorded and audited, providing the organisation with a complete list of those who have accessed the racks.</p>
<p>For organisations, this approach adds control and addresses one important aspect of the internal threat. For co-location sites, rack level security removes the need to cage off client specific rack space areas freeing up space that can be used for more racks, delivering a return on investment, as well as improved client security.</p>
</div>
The page can be found here.
I need a method to get the HTML in that div from just the page URL, keep the styling, put it into a string, and then populate a UIWebView. Can anyone help me, please?
Thanks.
A better description of the scenario:
Basically, I populate a UITableView from an RSS feed of blog posts. This is working fine. Tapping a blog post returns the URL of the post that was tapped as a string to be used. It's from here that I don't understand how to load this string into a web view, then use jQuery on that web view to display only the content in that one div class (mentioned above).
You could load everything on the page into the web view, and then run a bit of JavaScript to hide everything but the div and its contents. This Stack Overflow question (well, really, the answer) looks like it would help you accomplish that. If the page is not large, this could be the way to go, as styling would be preserved as a nice side effect.
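For instance, a minimal sketch of that JavaScript (assuming the div quoted above is the target), which you'd inject with stringByEvaluatingJavaScriptFromString: once the page finishes loading:

// Grab the target div, then replace the body's contents with just it.
// Stylesheets in <head> are untouched, so the styling is preserved.
var target = document.querySelector('div.field-item.even[property="content:encoded"]');
if (target) {
    document.body.innerHTML = '';
    document.body.appendChild(target);
}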
With v1 of the Measurement Protocol, you could use these parameters to add custom dimensions or change the medium, source or referrer for a pageview:
https://ssl.google-analytics.com/collect?v=1&tid=UA-xxxxxxxx&cid=[custom-id]&t=pageview&dp=[Url of pageview]&dh=[hostname of pageview]&cm=[new-medium]&cs=[new-source]&dr=[new-referer]&cd1=[custom-dimension-1]&cd2=[custom-dimension-2]
How is it done in measurement protocol v2?
I couldn't find any documentation about the page-view event in v2. For example, it's just not mentioned in the event reference (https://developers.google.com/analytics/devguides/collection/protocol/ga4/reference/events), and even the event builder (https://ga-dev-tools.web.app/ga4/event-builder/) doesn't support a simple page view.
So, all I got so far is this:
$data = '
{
    "client_id": "'.[custom-id].'",
    "events": [
        {
            "name": "page_view",
            "params": {
                "page_location": "'.[Url of pageview].'"
            }
        }
    ]
}
';
So, what are possible parameters for a page-view-event?
Ok, a few things here right away that you should know if you're playing with MP:
Measurement Protocol is a poor name. It implies there's more than one protocol for data gathering; there's only one protocol for tracking.
MP2 is still largely MP1. Google tries to position GA4 as a new product, but it's just our good old GA UA with a simplified backend and an over-engineered front-end that tries to deliver the level of quality Site Catalyst/Omniture/Adobe Analytics has been delivering for a decade. MP is largely the same: dr, cm, cs and a lot of other fields are still there. CDs aren't there anymore because they've been replaced with EPs and UPs, but more about that a bit later.
GA4 uses this big marketing claim that the new analytics is so wonderfully event-based, unlike the old one. When I dug into why they keep claiming it everywhere, I realized that the only difference is that pageviews are now events. Not much difference really. But yes, a pageview is just an event named page_view. We'll talk about it a bit more later.
Custom dimensions are no more. Now they're called event properties and user properties. The same thing really, Google just tries to make it less obvious that there are no more session level custom dimensions. Or product-level CDs. Though the product level is seemingly on their roadmap.
Make sure you're using the correct measurement id. They made it a lot harder to find it in GA4. It's no longer just the property id visible in the property list, unfortunately.
GA's real-time reports don't include all dimensions, especially if those dimensions are involved in advanced metrics/dimensions calculations. Do not use real time reports for inspecting the content of your events. It's not meant for debugging. It's a vanity report. Still helpful to check the volume of events when you're sending a bunch and expect to see them in GA. Google even has a warning here:
Like the DebugView report, the Realtime report performs limited attribution analysis to ensure responsive reporting. We recommend that you refer to the Acquisition reports for the most accurate attribution information.
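To make the points above concrete, here's a hedged sketch of the MP2 equivalent of the v1 hit in the question. The measurement_id and api_secret values are placeholders, and a custom parameter like custom_param_1 only shows up in reports once it's registered as a custom dimension in the GA4 UI:

fetch('https://www.google-analytics.com/mp/collect?measurement_id=G-XXXXXXX&api_secret=YOUR_SECRET', {
  method: 'POST',
  body: JSON.stringify({
    client_id: 'custom-id',                        // cid in MP1
    events: [{
      name: 'page_view',                           // t=pageview in MP1
      params: {
        page_location: 'https://example.com/page', // dp/dh in MP1
        page_referrer: 'https://example.org/',     // dr in MP1
        custom_param_1: 'value'                    // cd1 in MP1, now an event property (ep)
      }
    }]
  })
});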
Finally, instead of reading the still-unfinished-and-not-really-helpful documentation on MP2, what I often do is either use a library like this, or, given the points above, just implement the tracking I want to mimic in my test GTM container, watch what it sends where in the Network debugger, and simply reimplement it on my side exactly how GTM does it. No magic involved. Here is how my GTM tag would look:
With a trigger on any click or any page load. After all is done, I publish the container. Then I inject this GTM container's code into a local site, or my test site, or however else you want to test it, and trigger the tag that you need to mimic with MP.
I use this wonderful extension to show all events that fire and their details right in my console.
Now this is how the above tag looks on my test site through the extension:
It's pretty useful.
How do I know that page_referrer is used as dr instead of ep in GTM? Here is the list of the fields that will never be seen as ep. But Google doesn't care enough to map them properly to what these fields are called in MP, so you either have to test, or know, or google it elsewhere.
Finally, here is how the network request looks:
I published the tag to prod (I keep a test site in prod), so you can go and look at it. Or just find a site that uses GA4 and look at its network requests. How does Google know that this is a pageview? By the event name: en=page_view
Of course, you do the same with medium and source. Judging from the documentation I've linked to above, the medium and source look like campaign_source and campaign_medium in GTM. GTM maps them accordingly to cs and cm fields. And that's how you know these are the correct mp fields. Give GA time to process these and check on them in a few days.
Good, now this is applicable to the enhanced ecommerce hits too, it's just that they have more variables and data structures in them typically.
Finally, if you want to simulate batch events, you can just make a few tags fire in rapid succession and GTM will neatly pack them into one network request if they fit. You can then see how the packing is done using the same methods as above and simulate it.
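Over MP2, batching is just multiple entries in the events array; per the GA4 MP limits, a request can carry up to 25 events. A hedged sketch with the same placeholders as before:

fetch('https://www.google-analytics.com/mp/collect?measurement_id=G-XXXXXXX&api_secret=YOUR_SECRET', {
  method: 'POST',
  body: JSON.stringify({
    client_id: 'custom-id',
    events: [ // up to 25 events per request
      { name: 'page_view', params: { page_location: 'https://example.com/a' } },
      { name: 'page_view', params: { page_location: 'https://example.com/b' } }
    ]
  })
});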
I have a requirement where we need to show around 24k records with 84 columns in one go, as the user wants filtering on the entire set of data.
So can we have a virtual scrolling mechanism with ag-grid without lazy loading? If so, could you please explain how? Any examples are most welcome for reference.
Having tried this sort of thing with a similar number of rows and columns, I've found that it's just about impossible to get reasonable performance, especially if you are using things like "framework" renderers. And if you enable grouping, you're going to have a bad time.
What my team has done to enable filtering and sorting across an entire large dataset includes:
We used the client-side row model - the grid's simplest mode
We only load a "page" of data at a time. This involves trial and error with a reasonable sample of data and the actual features that you are using to arrive at the maximum page size that still allows the grid to perform well with respect to scrolling / rendering.
We implemented our own paging. This includes display of a paging control, and fetching the next/previous page from the server. This obviously requires server-side support. From an ag-grid point of view, it is only ever managing one page of data. Each page gets completely replaced with the next page via round-trip to the server.
We implemented sorting and filtering on the server side. When the user sorts or filters, we catch the event, send the sort/filter parameters to the server, and get back a new page (see the sketch after this list). When this happens, we revert to page 0 (or page 1 in user parlance).
This fits in nicely with support for non-grid filters that we have elsewhere in the page (in our case, a toolbar above the grid).
We only enable grouping when there is a single page of data, and encourage our users to filter their data to get down to one page of data so that they can group it. Depending on the data, page size might be as high as 1,000 rows. Again, you have to arrive at page size on a case-by-case basis.
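A rough sketch of that sort/filter round trip (the endpoint, payload shape and page size are hypothetical, and getSortModel is the pre-v24 ag-grid API):

const gridOptions = {
  rowModelType: 'clientSide',
  onSortChanged: () => loadPage(0),   // new sort -> back to page 0
  onFilterChanged: () => loadPage(0), // new filter -> back to page 0
};

function loadPage(page) {
  const body = {
    page,
    pageSize: 500, // arrived at by trial and error, per dataset
    filterModel: gridOptions.api.getFilterModel(),
    sortModel: gridOptions.api.getSortModel(),
  };
  fetch('/api/rows', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  })
    .then(r => r.json())
    .then(rows => gridOptions.api.setRowData(rows)); // replace the whole page
}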
So, in short, when we have the need to support filtering/sorting over a large dataset, we do all of the performance-intensive bits on the server side.
I'm sure that others will argue that ag-grid has a lot of advanced features that I'm suggesting that you not use. And they would be correct, for small-to-medium sized datasets, but when it comes to handling large datasets, I've found that ag-grid just can't handle it with reasonable performance.
I'm designing a ticket booking API. Right now booking a ticket resolves into POST /users/{id}/tickets, but each /events/{id} has a maximum number of available tickets. How do I properly design a check?
I've come up with two ways:
1) having an availibleTickets field on /events/{id} that gets checked and possibly updated each time I POST a new ticket.
2) having a maxTickets field on /events/{id}, checking the length of the GET /events/{id}/tickets array, and comparing it to maxTickets.
Either way I have to perform a GET request inside the POST handler, but it doesn't look right to me. Do you have any suggestions?
How would you design a ticketing system for a Web page? The same steps you apply to a Web page also apply to REST, as it is just a generalization of the same interaction flow used on the Web.
Usually, on the Web you have a page where you can see an event you can order tickets for, with a link to order tickets for that particular show. Depending on the system, you might see a layout of the event venue in the form of buttons or images to click if there is a fixed seat order, where available seats are marked green and already-booked ones red (or whatever color scheme is used). A click on a seat triggers some reservation logic on the server that returns almost the same page as before, but this time with the seat marked orange to indicate a reservation. Next you click an available seat beside it to reserve a further seat. This continues until you either have enough seats marked as reserved, or no seats are available and your only options are to cancel the reservation, proceed to the order step, or unreserve seats you marked beforehand. Once you are satisfied with your choice, you will find an order or submit button or link where you turn your reservation into a booking. This might involve further steps like entering your contact and/or billing information. That, in principle, is how I'd design such a system for the Web.
As you might see, this turns into a kind of state machine where the server tells you all of the options you have available in the current state of the process. This is exactly what Asbjørn Ulsberg mentions when talking about affordance and state machines. From the blueprint of the venue and the respective seats on that blueprint, which are actually buttons or images you might click, you know what these widgets are for and roughly what will happen when you click on one of the seats. This is what affordance is all about: by seeing it, you know what you can do with it.
The interaction concept outlined above should be taken and translated to REST. As a client you don't need to know the structure of the URI; all you need to know is what seats are available and what happens when you click certain links. In REST this is usually done through link relation names, which give the mentioned links semantic context within the current state of the resource the client just fetched. Such link relations may seem like a priori knowledge needed by the client, which is a bit anti-REST, as REST tries to decouple clients from servers so the latter can evolve freely without risking breaking clients. Link relations should therefore be standardized, or based on extensions such as Dublin Core or other microformats. Building on standards will either lead to broad acceptance and support by different clients, or to mechanisms for plugging such knowledge into a client later on. In general this avoids so-called out-of-band information or process flows that force you to look up the manual on how to use the system.
The approach outlined above would utilize its own reservation resource, uniquely created on "entering" the reservation and kept until the order-ticket step is invoked. This reservation resource keeps track of the seats the user has chosen so far. Whether the system considers seats reserved by other users as taken is an implementation detail. It is fine to use either a first-come-first-served policy or a more polite one that guarantees the reserver their seats until some grace period has passed without an order. This gives you a good impression that such resources can be volatile and just part of a certain process.
In regard to whether to use GET, POST or other HTTP methods: a Web page that sends you to a reservation page will show you a form containing all of the seats of the venue. As HTML only supports GET and POST, the latter is the most appropriate choice there. In a REST or HTTP API you might use PUT instead. A server might already have assigned you a certain unique "reservation" link that you can just invoke with PUT: if the reservation resource does not exist yet, it will be created for you; if it does, its content will just be updated. Especially when you are dealing with reservations and money flows, you want to use idempotent methods such as PUT.
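As an illustrative sketch (the reservation URI and payload are hypothetical; the point is that the server hands out the link and the client just PUTs to it):

fetch('https://api.example.com/reservations/1f3a9', {
  method: 'PUT', // idempotent: retrying can't create a second reservation
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ seats: ['A4', 'A5'] }) // the full current set of reserved seats
});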
I hope I could give you some ideas on how you might design your reservation system by letting a server teach a client everything it needs to know to proceed through its task.
It's inside the POST method (server-side) that you must check whether tickets are available before booking the event.
You can create a specific route to find out how many tickets are available, if needed; the client could call it before booking an event. Or expose the availibleTickets field in GET /events/{id}.
Imagine 10 clients trying to buy the last ticket at the same time: if the check is not in the POST method, you'll book 9 imaginary tickets.
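A hedged sketch of that server-side check, as a hypothetical Express + SQL handler; the essential point is that the availability check and the decrement happen atomically in a single statement:

const express = require('express');
const { Pool } = require('pg');

const app = express();
const db = new Pool();
app.use(express.json());

app.post('/users/:id/tickets', async (req, res) => {
  // Decrement only while a ticket is left -- atomic, so concurrent
  // requests can't all grab the last one.
  const result = await db.query(
    'UPDATE events SET available_tickets = available_tickets - 1 WHERE id = $1 AND available_tickets > 0',
    [req.body.eventId]
  );
  if (result.rowCount === 0) {
    return res.status(409).send('Sold out');
  }
  const ticket = await db.query(
    'INSERT INTO tickets (user_id, event_id) VALUES ($1, $2) RETURNING id',
    [req.params.id, req.body.eventId]
  );
  res.status(201).json(ticket.rows[0]);
});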
Ember routing works nicely with strict linear paths of resources. However, it's becoming more prevalent in mobile design for apps, such as the Facebook app, to use infinite stacks of multiple interconnected resources:
User starts in the feed.
Presses on a link to a user profile.
Navigates to user's list of friends.
Visits a new user profile.
Goes to any other types of resources such as posts, groups etc.
THEN can navigate all the way back with each page state persisted.
We start off in a known resource - let's say it's Facebook's news feed. In Ember, we'd define that route as:
this.route('feed');
But from there, we'd like to be able to visit any combination of our resources - whilst still maintaining the state of each route. The temptation is to solve the problem through some funky route solution, such as using catch-all route definitions:
{ path: '*' }
But that'd take some heavy path management as discussed here (or perhaps there's some method of utilising Tilde's router.js URL generation?!). But as per the illustrated example above, it would leave us with huge goddamn route paths:
/feed/users/:user_id/friends/users/:user_id/another_resource/:another_resource_id
Not that that's a problem in a mobile app, but I'm not sure if it's the optimal way of achieving this.
So that leads me to consider whether there's some method of dynamically creating outlets as stacks get deeper (akin to modals) - or whether the only way to achieve state persistence is using an app level object, Ember data or an Ember service to track/persist history & state.
Anyway, tl;dr, I'm not desperately needing this functionality - just interested if anyone has a smart insight into how to achieve this ...umm... infinite interconnected nested resource stack thingy.
The answer does not lie in nesting routes in an attempt to prevent them from being torn down.
Instead the answer lies in state management.
Browser history can be used to manage URLs and bound to the back buttons on each page in our stack. However, restoring the exact state of the page (including scroll position, especially when models may be lazy loaded) requires some additional design.
The easiest method of doing it is using the ember-state-services addon.
In particular, this video by Travis Hoover from Oct '15 was really helpful. It explains how ember-state-services creates 'buckets' for different model instances.
So, when navigating through our Facebook stack, the state of each page can easily be stored & restored even if we visit pages which reference the same route/controller. In our example, it helps with preserving the state of the two user profiles we navigate to (user/:user_id).
So, for storing each user profile page's scroll position, get the scroll offset from your scrolling div/component and use ember-state-services like so:
// app/controllers/feed.js
scrollPos: stateFor('scrollPos', 'model'),

// app/routes/feed.js
saveScrollPos: Ember.on('deactivate', function() {
  // scrollValue is the offset read from the scrolling div/component
  this.set('controller.scrollPos', scrollValue);
})
It'll store your last scroll positions on users/1 AND users/2 separately because state is bound to the particular user models.
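Restoring is the mirror image; a sketch, assuming a hypothetical .scroll-container element and that era's run loop API:

// app/routes/feed.js
restoreScrollPos: Ember.on('activate', function() {
  Ember.run.scheduleOnce('afterRender', this, function() {
    // re-apply the saved offset once the template has rendered
    Ember.$('.scroll-container').scrollTop(this.get('controller.scrollPos'));
  });
})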
The only gotcha I can foresee is if the user was to visit the exact same route multiple times in one stack, but there's not many use cases where that would be a problem.
One of our web sites is a typical "advertise your apartment for free" site.
Revenue is directly tied to the amount of public usage and the number of listings registered (our marketing department's argument).
On the other side, REST pushes you to maintain a clean API when designing it (our software department's argument), which is an open invitation for any competitor to steal our data. In this view, the web server becomes almost an intelligent database.
We have clearly identified our problem, but have no idea how to resolve these constraints. Any tips would help?
Throttle the calls to the data-rich elements by IP to, say, 1000 per day (or triple what a normal user would use).
If you expose data then it can be stolen. And think about search endpoints that return large datasets, even if they are driven by JavaScript or forms - I have personally written trawlers that circumvent these protections.
You may also think (if the data is that important) about decrypting it in the client based on keys and authentication sent from the server (but this only raises the bar; it doesn't remove the ability to steal).
Add captcha/re-captcha for users who are scanning too quickly or too much.
In short:
As always only expose the minimum API to do the job (attack surface minimisation)
Log and throttle (see the sketch below)
Force sign-in(?). This at least MAY put off some scanners
Use a captcha mechanism for users you think may be bots trawling your data
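To make the log-and-throttle point concrete, a minimal sketch of a per-IP daily quota as Express middleware; it's in-memory and hypothetical, and production use would want Redis or an off-the-shelf rate limiter:

const express = require('express');
const app = express();

const hits = new Map(); // ip -> { count, day }

app.use((req, res, next) => {
  const day = new Date().toDateString();
  const entry = hits.get(req.ip) || { count: 0, day };
  if (entry.day !== day) { entry.count = 0; entry.day = day; } // new day, reset
  entry.count += 1;
  hits.set(req.ip, entry);
  if (entry.count > 1000) { // ~triple a normal user's volume
    return res.status(429).send('Daily quota exceeded');
  }
  next();
});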