SparkAR: Make a network request without NetworkingModule

Since NetworkingModule is not available at the time of posting, is there any other way to make a network request?
"We're temporarily disabling this capability while we work to improve performance and functionality. We'll reintroduce networking in the near future."
Source: https://sparkar.facebook.com/ar-studio/learn/documentation/reference/classes/networkingmodule

I guess they have safety/security reasons for disabling this feature. While you are inside the effect environment you can't do much, and anything that works around the sandbox will be unwanted by Facebook.

All networking capabilities have been removed from Spark AR for security reasons; you can't make network requests.
If you need networking capabilities for adding videos, I can help you out :)

Related

Where can I store data for distributing it online?

My company has an application which can be installed with Qt Online-Installers. The data is stored on our own server, but over time we found out that the internet connection is a bit slow for users on the other side of the world. So the question is: "Which services, designed for this purpose, can we use to store this data?" While investigating this question I found information about something called a "Content Delivery Network", but I'm not sure whether it fits.
Unfortunately, I don't have enough experience in this area, so maybe somebody knows more and could give me some advice. Thank you!
CloudFront on AWS. It depends on what your content is, but you can probably store it on S3 and then use CloudFront to cache it at edge locations across the globe.
Your research led you to the right topic, because it sounds like you could benefit from a CDN. CDNs store cached versions of your website, download files, video, etc. on their servers, which form a distributed network across the globe; the individual locations are known as 'Points of Presence' (PoPs). When a user requests a file from your website, assuming it is leveraging a CDN, the request actually goes to the closest PoP, which serves the file. This improves performance because the user may be very far from your origin server, or your origin server may not have enough resources to answer every request by itself.
How long a CDN caches objects from your site depends on configurable settings. You can tell the CDN how to cache objects using HTTP cache headers. Here is an intro video from Akamai, one of the largest CDNs, with a helpful explanation of HTTP caching headers.
https://www.youtube.com/watch?v=zAxSE1M4yKE
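If you want to see how a CDN is treating your objects, the response headers tell most of the story. A minimal sketch in Python (the URL is hypothetical; substitute a file served through your CDN):

    import requests

    # Hypothetical URL; swap in a file actually served through your CDN.
    r = requests.head("https://cdn.example.com/downloads/installer.bin")

    print(r.headers.get("Cache-Control"))  # e.g. "public, max-age=86400", set by your origin
    print(r.headers.get("Age"))            # seconds the object has been sitting in the cache

A non-zero Age on repeat requests means the PoP is serving its cached copy instead of going back to your origin.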
Cheers.

Which exploit and which payload should I use?

Hi everyone and sorry for my bad English.
I'm learning penetration testing.
After reconnaissance and scanning of my target, I have enough information to move to the next phase.
The information I have includes open ports with their running services, service names and versions, the operating system of the device, firewalls in use, etc.
I launched msfconsole.
Based on the information collected, I should find the correct exploit and payload to gain access. I've read the Metasploit Unleashed guide on Offensive Security's site. I've learned the Metasploit fundamentals and the use of msfconsole.
But I don't understand where to start. Assuming my target has 20 open ports, I want to test for vulnerabilities using an exploit and payload that do not require user interaction. That narrows down the candidate exploits and payloads, but there are still too many. Searching and testing every exploit and payload for each port isn't practical! So if I don't know the target's vulnerabilities, how do I proceed?
I would like to understand what I'm doing, not just try things blindly.
Couple of things:
We have a Stack Exchange for security! Check it out at https://security.stackexchange.com/
For an answer: you want to look for "remote exploits", as those do not require user interaction. You can find a curated list of exploits here: https://www.exploit-db.com/remote/
You can search the services on this page for something that matches the same service/version as your attack vector.
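As a rough sketch of what that loop looks like inside msfconsole (the commands shown are standard; the Samba module is just one real example, and whether it applies depends entirely on the service/version your scan actually found):

    msf6 > search type:exploit name:samba
    msf6 > info exploit/linux/samba/is_known_pipename
    msf6 > use exploit/linux/samba/is_known_pipename
    msf6 exploit(linux/samba/is_known_pipename) > show payloads
    msf6 exploit(linux/samba/is_known_pipename) > set RHOSTS 10.0.0.5

The idea is to go from your recon data (service plus version) to a handful of candidate modules via search, read each one with info, and only then pick a payload, rather than spraying every exploit at every port.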

CDN for a RESTful API?

I have a RESTful API whose resources update once a week. That is, each resource is updated once a week, and clients access it in between. It's an ever-changing calculator.
There are probably 10,000 resources which could be requested.
Is it possible to put something like this behind a CDN? Traditionally CDNs are for undeniably static content, i.e. images. I'm not sure where my situation sits on the spectrum of dynamic <-> static.
Cheers
90% of the resources might not even get called, and if they are, will get called a few times only. It won't be a mass of repetitive calls.
Right there in your comments, you just showed me that a CDN is not beneficial to you.
Usually a CDN works like this: on the first call, the object is downloaded from the origin server to the regional CDN node and then delivered to the client, so the first GET sees no improvement. Subsequent GETs to the same regional node do get the speed improvement. If you have little to no repeat traffic, you will not see any noticeable improvement.
As I said in the comments, for small files, clients are probably spending as much time on the DNS lookup as they are on the download. Look into a global DNS solution (like Anycast) to reduce connection times. This is easy to set up and requires little to no maintenance.
I think it's entirely reasonable to put it behind a CDN if you think your content will reach the appropriate level of scale. As long as the cache-control headers are set such that the latest content is loaded when the cached version may be stale, you'll be fine.
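As a concrete sketch of that header strategy (Flask chosen arbitrarily; compute_resource is a hypothetical stand-in for your weekly-updated calculation):

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/resource/<int:rid>")
    def resource(rid):
        resp = jsonify(compute_resource(rid))  # hypothetical weekly-updated payload
        # Resources change at most once a week, so a CDN may serve a cached
        # copy for up to a day; a stale entry refreshes within 24h of the update.
        resp.headers["Cache-Control"] = "public, max-age=86400"
        return resp

Since most of your resources are rarely requested, this costs nothing where there is no repeat traffic and helps automatically where there is.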
The main benefit of CDNs comes when resources are requested from a variety of different sources, and so siteY.com can use the same cached version of a resource as siteX.com. Do you anticipate your resources will be requested from various different sources?

Do I need 'taint' mode for an internal website?

The medium-sized, internal-only website that I came in to support has about half of its *.cgi files running without 'taint' mode. Do I need 'taint' mode for an internal website?
Do you trust the internal users? If not, then yes.
Let's say you do trust your internal users and don't need taint at the moment. You could still consider leaving taint ON in any existing scripts, if only to train yourself in how to use it. It's not as bad as it feels at first, kind of like walking on coals; it gets better.
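For anyone doing that training, a minimal sketch of a CGI script running under taint mode (Perl here, since taint is a Perl feature; the parameter name and log path are made up):

    #!/usr/bin/perl -T
    use strict;
    use warnings;
    use CGI;

    my $q    = CGI->new;
    my $user = $q->param('user') // '';   # anything from the request arrives tainted

    # Untainting: a regex capture is the blessed way to declare data safe,
    # so allow only the characters you explicitly expect.
    my ($safe) = $user =~ /^(\w{1,32})$/
        or die "bad user name";

    # With the raw (tainted) $user this open would die with
    # "Insecure dependency in open"; with $safe it is allowed.
    open my $fh, '>>', "/var/www/logs/$safe.log"
        or die "open failed: $!";
    print {$fh} "visited at " . localtime() . "\n";
    close $fh;

The useful habit taint forces is exactly that middle step: every piece of outside data passes through a whitelist pattern before it touches the filesystem or a shell.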
I can say that I've had more than one 'internal' website suddenly (requirements changed) become customer facing, exposed to the internet, and needing better security.
Another thing to keep in mind is that internal users are sometimes the most disgruntled and the most likely to want to hurt your organization in some petty way.

What technical considerations must a system/network administrator worry about when a site gets onto social bookmarking/sharing sites?

The reason I ask is that Stack Overflow has been Slashdotted, and Redditted.
First, what kinds of effect does this have on the servers that power a website? Second, what can be done by system administrators to ensure that their sites remain up and running as best as possible?
Unfortunately, if you haven't planned for this before it happens, it's probably too late and your users will have a poor experience.
Scalability is your first concern. You may start getting more hits per second than you were getting per month. Your first line of defense is good programming and design. Make sure you're not doing anything stupid like reloading data from a database multiple times per request instead of caching it. Before the spike happens, you need to do some fairly realistic load tests to see where the bottlenecks are.
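That "don't reload from the database on every request" point can be as simple as a small TTL cache; a sketch (db_query is a hypothetical stand-in for your real data access):

    import time

    _cache = {}  # (function name, args) -> (value, fetched_at)

    def cached(ttl_seconds):
        """Reuse a function's result for ttl_seconds so a traffic spike
        doesn't become one database round-trip per page view."""
        def decorator(fn):
            def wrapper(*args):
                now = time.time()
                hit = _cache.get((fn.__name__, args))
                if hit and now - hit[1] < ttl_seconds:
                    return hit[0]
                value = fn(*args)
                _cache[(fn.__name__, args)] = (value, now)
                return value
            return wrapper
        return decorator

    @cached(ttl_seconds=300)
    def front_page_items():
        return db_query("SELECT ...")  # hypothetical expensive query

Even a five-minute cache on the hottest pages turns thousands of identical queries into one.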
For absurdly high traffic, consider the ability to switch some dynamic pages over to static pages.
Having a server architecture that can scale also helps. Shared hosts generally don't scale. A single dedicated machine generally doesn't scale. Using something like Amazon's EC2 to host can help, especially if you plan for a cluster of servers from the beginning (even if your cluster is a single computer).
Your next major concern is security. You're suddenly a much bigger target for the bad guys. Make sure you have a good security plan in place. This is something you should always have, but it becomes more important with high usage.
Firstly, ask if you really want to spend weeks and thousands of dollars planning for something that might never happen, and if it does happen, lasts about five hours.
Easiest solution is to have a good way to switch to a page simply allowing a signup. People will sign up and you can email them when the storm has passed.
More elaborate solutions rely on being able to scale quickly. That's firstly a software issue (can you connect to a DB on another server? can you do load balancing?). Secondly, your hosting solution needs to support fast expansion. Amazon EC2 comes to mind, or maybe Slicehost. With both services you can easily start new instances ("Let's move the database to a different server") and expand your instances ("Let's upgrade the DB server to 4GB RAM").
If you keep all data in the DB (including sessions), you can easily have multiple front-end servers. For the database I'd usually try a single server with the highest resources available, but only because I haven't worked with DB replication and it used to be quite hard to do, at least with MySQL. Things might have improved.
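A sketch of the "sessions in the database" idea (SQLite is used here only to keep the example self-contained; a real multi-server setup would point this at a shared MySQL/PostgreSQL instance):

    import secrets
    import sqlite3
    import time

    db = sqlite3.connect("sessions.db")
    db.execute("CREATE TABLE IF NOT EXISTS sessions"
               " (token TEXT PRIMARY KEY, user_id INTEGER, expires REAL)")

    def create_session(user_id, ttl=3600):
        # The token goes back to the client in a cookie; any front-end
        # server that can reach the database can resume the session.
        token = secrets.token_hex(16)
        db.execute("INSERT INTO sessions VALUES (?, ?, ?)",
                   (token, user_id, time.time() + ttl))
        db.commit()
        return token

    def load_session(token):
        row = db.execute("SELECT user_id FROM sessions WHERE token = ?"
                         " AND expires > ?", (token, time.time())).fetchone()
        return row[0] if row else None

Because no session state lives in the web server's memory, you can add or remove front-end servers behind the load balancer at will.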
The app designer needs to think about scaling up (larger machines with more cores and higher performance) and/or scaling out (distributing workload across multiple systems). The IT guy needs to work out how to best support that. The network is what you look at first, because obviously everything rides on top of it. Starting at the border, that usually means network load balancers and redundant routers being served by multiple providers. You can also look at geographic caching services and apps such as cachefly.
You want to reduce your bottlenecks as much as possible. You also want to design the environment so that it can be scaled out as needed without much work. Do the design work up front and it'll mean fewer headaches when you do get dugg.
Some ideas (of what I used in the past and current projects):
For boosting performance (if needed) you can put a reverse-proxying, caching Squid in front of your server. Of course that only works if you don't use session keys and if the pages are somewhat static (meaning they change only once an hour or so) and not personalised.
With Squid you can speed up a bloated and slow CMS like TYPO3, getting the performance of a static website with the comfort of a CMS.
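A rough sketch of that accelerator setup in squid.conf (based on the Squid 3.x reverse-proxy examples; host names are placeholders and details vary by version):

    # Squid listens on port 80 and forwards misses to the CMS on port 8080.
    http_port 80 accel defaultsite=www.example.com
    cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=origin
    acl our_site dstdomain www.example.com
    http_access allow our_site
    cache_peer_access origin allow our_site

Cache hits never reach the CMS at all, which is where the static-site-like performance comes from.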
You can offload large files to external services like Amazon S3, saving your server's bandwidth.
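For instance, pushing a large file to S3 with boto3 looks roughly like this (bucket and file names are made up; newer buckets often block public ACLs, in which case you'd grant read access via a bucket policy instead):

    import boto3

    s3 = boto3.client("s3")

    # Upload a big download once; afterwards clients fetch it from S3's
    # bandwidth instead of yours.
    s3.upload_file(
        "dist/installer-1.2.dmg",        # local path (made up)
        "my-public-assets",              # bucket name (made up)
        "downloads/installer-1.2.dmg",   # object key
        ExtraArgs={"ContentType": "application/octet-stream"},
    )

    print("https://my-public-assets.s3.amazonaws.com/downloads/installer-1.2.dmg")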
And if you are able to spend some (three figures per month) bucks, you can also use a Content Delivery Network. With that in place you automatically get scaling, high availability and low latency for your users. Of course, your pages must be cacheable, so session keys and personalised pages are a no-no. If designed carefully and with CDNs in mind, you can at least cache SOME content, like pics and videos and static stuff.
The load goes up, as other answers have mentioned.
You'll also get an influx of new users/blog comments/votes from bored folks who are only really interested in vandalism. This is mostly a problem for blogs which allow completely anonymous commenting, where some dreadful stuff will be entered. The blog platform might have spam filters sufficient to block it, but manual intervention is frequently required to clean up remaining drivel.
Even a little barrier to entry, like requiring a user name or email address even if no verification is done, will dramatically reduce the volume of vandalism.