Are Access Point (AP) and Master Mode the same thing? - raspberry-pi

Are Access Point mode and Master mode the same thing? Looking at the manual, they seem to be.

The actual mode is referred to as Master mode. Devices that are capable of Master mode are commonly referred to as Access Points. Because of this, there is some intermixing of the names.

Setting trainer resistance using Swifty Sensors and Wahoo's cycling power service extension

I'm using the SwiftySensors CocoaPod to connect to a Wahoo Smart Trainer. It's advertising CyclingPowerService and DeviceInformationService. I've been able to get speed and power values without issue. Wahoo apparently extended the CyclingPowerService standard to allow setting resistance via that service instead of the Fitness Machine Control service.
https://github.com/codeinversion/sensors-swift links out to another Github page dealing with that extension, but that link is broken.
My question is: how should I go about setting the trainer's resistance? Wahoo's app can do it, so the machine is equipped for it. This is the only time I need to change the trainer's settings. Otherwise, I'm just reading sent information and SwiftySensors works great.
I've referenced the following post: Writing BLE to Cycling Control Point - Adding Resistance. Someone there said that setting resistance through CyclingPowerService was possible, but didn't offer any guidance. I'm not very experienced with Bluetooth, so any information would be great!
Thank you Jordan. That was the answer. The broken link I referenced must have been pointing to the following repo: https://github.com/WahooFitness/sensors-swift-trainers
The following instructions assume that you're already able to connect to the trainer and receive data from it, like speed and power, using the SwiftySensors CocoaPod and the CyclingPowerService. Using the repo linked above, I was able to set the resistance on the Wahoo Snap trainer. Note that after you install that new repo, and before you start scanning for sensors to connect to, you need to call
CyclingPowerService.WahooTrainer.activate()
From there, you set the resistance with
if let wahooTrainer = cyclingPowerService.wahooTrainer {
    wahooTrainer.setResistanceMode(resistance: 0.5)
}
The resistance is set using percentages. The value for resistance will be a Float, somewhere between 0 and 1.
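Putting those pieces together, here's a minimal sketch of the ordering described above. It only uses the calls already shown in this answer; the module names in the import lines are my assumption and may not match the actual pod names, so check your Podfile.
// Sketch only: module names below are assumptions, based on
// codeinversion/sensors-swift and WahooFitness/sensors-swift-trainers.
import SwiftySensors
import SwiftySensorsTrainers

// 1. Register the Wahoo extension before you start scanning for sensors.
func registerWahooTrainerSupport() {
    CyclingPowerService.WahooTrainer.activate()
}

// 2. Once the CyclingPowerService has been discovered on the connected trainer,
//    the extension exposes a `wahooTrainer` handle for control commands.
func applyResistance(_ fraction: Float, to cyclingPowerService: CyclingPowerService) {
    if let wahooTrainer = cyclingPowerService.wahooTrainer {
        wahooTrainer.setResistanceMode(resistance: fraction)  // expects 0.0 ... 1.0
    }
}
The key point is simply that activate() has to run before scanning starts, and wahooTrainer will presumably be nil until the service has been discovered on the connected trainer.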

How to enable fire and forget mode in moxi

I'm using moxi as a proxy for memcache cluster. In documentation I found that:
it supports fire-and-forget work tasks.
So, "SET" should return SUCCESS immediately without waiting memcache response.
But I didn't find how to enable it! I tried to google, tried to read source code. It didn't help.
So, does anyone know how to enable this mode? Or is it enabled by default?
Memcached has quiet commands, but I don't think they are implemented very often by SDKs (except for use in multi-get/set operations). These are likely what you need in order to get this behavior. It is not a moxi setting, but a set of client commands that you would use.
Binary quiet commands (they end with a Q, for quiet):
https://github.com/memcached/memcached/blob/master/protocol_binary.h#L98
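To make the "quiet" part concrete, here is a rough sketch that lays out a binary-protocol SetQ request by hand, following the header layout in protocol_binary.h. It is only an illustration; socket I/O and error handling are left out, and in practice you would use a client library that already exposes the quiet opcodes.
import Foundation

// Builds a memcached binary-protocol SetQ ("set, quiet") request.
// Quiet commands send no response on success, which is the fire-and-forget behavior.
func makeSetQRequest(key: String, value: Data, flags: UInt32 = 0, expiry: UInt32 = 0) -> Data {
    let keyBytes = Data(key.utf8)
    let extrasLength: UInt8 = 8                         // 4-byte flags + 4-byte expiry
    let totalBody = UInt32(extrasLength) + UInt32(keyBytes.count) + UInt32(value.count)

    var packet = Data()
    func appendBigEndian<T: FixedWidthInteger>(_ v: T) {
        withUnsafeBytes(of: v.bigEndian) { packet.append(contentsOf: $0) }
    }

    packet.append(0x80)                      // magic: request
    packet.append(0x11)                      // opcode: SetQ (0x01 is the normal, non-quiet Set)
    appendBigEndian(UInt16(keyBytes.count))  // key length
    packet.append(extrasLength)              // extras length
    packet.append(0x00)                      // data type
    appendBigEndian(UInt16(0))               // vbucket id / reserved
    appendBigEndian(totalBody)               // total body length (extras + key + value)
    appendBigEndian(UInt32(0))               // opaque
    appendBigEndian(UInt64(0))               // CAS
    appendBigEndian(flags)                   // extras: flags
    appendBigEndian(expiry)                  // extras: expiration in seconds
    packet.append(keyBytes)
    packet.append(value)
    return packet
}
Because quiet commands only reply on error, you typically pipeline a batch of them and follow with a non-quiet command (a NoOp, for example) when you want to flush out any pending error responses.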

Do I need 'taint' mode for an internal website?

The medium-sized, internal-only website that I came in to support has about half of its *.cgi files running without taint mode. Do I need taint mode for an internal website?
Do you trust the internal users? If not, then yes.
Let's say you do trust your internal users and don't need taint at the moment. You could still consider leaving taint ON in any existing scripts, if only to train yourself in how to use it. It's not as bad as it feels at first, kind of like walking on coals; it gets better.
I can say that I've had more than one 'internal' website suddenly (requirements changed) become customer-facing, exposed to the internet, and in need of better security.
Another thing to keep in mind is that internal users are sometimes the most disgruntled and the most likely to want to hurt your organization in some petty way.

How to stop users from visiting staging area after production deployment

We have a few servers with different roles. For instance, we have production servers and testing/staging servers. We have a few end users who forget to switch paths to production once things are tested and approved for use; they use the new paths for a bit, then at some point revert back to the testing/staging paths for some reason we can't understand other than stupidity. We still want to be able to get a glimpse into our staging environment after pushing a build into production, but we want to stop them from being able to hit those servers/services.
We are now pondering some solutions to this problem. One is to never give them the direct staging URL. Another idea is to create a virtual directory, or a set of domain aliases that we could hand out and later shut down, while still allowing ourselves access to those endpoints. We could restrict our main staging domain to the office IP range so they never have direct access, and call it good.
Does this sound like a good solution? Is our process wrong, are there better routes?
I am interested in solutions for websites as well as web services where visuals can't be used effectively.
We've run into this at my work as well… quite recently in fact. One thing I considered, other than the virtual directory, was setting up specific ports for them to test on, then either taking those ports down or restricting them to our internal use only.
Without details on how your application is deployed, it's hard to give concrete examples. One wonderful solution is to get better users :P A more practical solution, however, is to let your production boxes route a certain set of users (as decided in your code) to your test/staging systems. That is, the user always connects to production, but at connect/auth time the production machines may decide those people belong on the test/staging code and send them there instead.
It's not a foolproof method, of course, but many websites use it to let a certain set of users into different parts of their codebase.
I don't know how feasible this would be for you, but it's a possibility perhaps.
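As a rough illustration of that idea (the type, host names, and tester list below are made-up placeholders, not anything from the question), the routing decision can live in one small function on the production side:
// Hypothetical sketch: decide at connect/auth time which backend a user should hit.
struct User {
    let id: String
}

let stagingTesters: Set<String> = ["alice", "bob"]   // users allowed into staging

func backendHost(for user: User) -> String {
    // Everyone connects to production; production quietly sends testers to staging.
    if stagingTesters.contains(user.id) {
        return "staging.internal.example.com"
    }
    return "www.example.com"
}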
I find that users sometimes have difficulty with URLs, and don't like subtle changes like a port number in the address.
The best approach I've found is to have the application tell the user what environment they are in.
For example, my teams have used absolutely positioned headers or footers, color coded for Dev/Staging environments that show the application version number with an alpha/beta tag, along with a message that says "Work done on this site will be lost, use Production (link) to keep your work." Typically we make the Dev area red, and the staging area yellow. We also like to put a link to the bug tracking system right in this area.
On production there is usually no region like this. However, we do sometimes provide positive reinforcement by placing a green region with the app version and a Production tag in it, and then fading the green region away after a few seconds. This helps keep the app front and center, but lets the user know they are in the right place.

pgpool-II for Postgres - Is it what I need?

I just stumbled upon pgpool-II in my search for a way to cluster my Postgres DB (I'm getting ready to deploy a web app in a couple of months). I still have the shakes from excitement, but I'm nervous, as each time I find something this excellent I am soon let down. Do you have any experience with pgpool-II, and will it help me run my database across multiple VMs, and later across multiple physical servers? Is it all I need for backups, load balancing, and higher availability for my DB server!?
Also, is it easy to use the parallel query function (for instance, from Django or through Python's psycopg2)? This would be most excellent for reporting and aggregation!
One last thing: it seems to sit between Postgres and psycopg2. Is that a correct understanding, meaning I can use psycopg2 as normal, without regard for pgpool-II?
pgpool-II works fine for what it claims to do. And it fits between your application and the database the way you expect it to; just point psycopg2 toward it instead of directly at the database and off you go.
The main thing to note is that while it supports many different features (replication, load balancing, parallel query), you can't use them all at once. It sounds like you may be under the impression that you can, and it doesn't work that way. The documentation is not all that clear on this subject (the English version at least; I can't speak to the original Japanese one).
For example, if you run pgpool-II in its "Master/Slave" mode, so that it supports load balancing for scaling reads, you have to use another program to actually do the replication between those nodes. Slony was the supported replication solution to put underneath it in earlier PostgreSQL versions; as of pgpool-II 3.0 and PostgreSQL 9.0, you can also use the soon-to-be-released Streaming Replication/Hot Standby features of that new version.
pgpool-II is a useful component and you can use it in a lot of interesting ways, but I doubt it will be "all you need" for every requirement you hope to achieve with it.