How can I scale a Botkit application?

I'm new to Botkit and still exploring the framework.
How does Botkit scale? Can I set up multiple servers and route requests randomly? Will the conversation context be preserved if each user's requests end up on different servers?

Saving the session state in a database is the only way to go; Botkit does not support this by default.
Other people are trying to solve a similar problem; this issue might help: https://github.com/howdyai/botkit/issues/251
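For illustration, here is a minimal sketch of what plugging in shared storage could look like, assuming a classic Botkit 0.x controller and the community botkit-storage-mongo plugin (the connection string is a placeholder):

    import * as Botkit from 'botkit';                 // classic Botkit 0.x API (assumption)
    import mongoStorage from 'botkit-storage-mongo';  // community storage plugin (assumption)

    // Placeholder connection string; point it at a database shared by all servers.
    const storage = mongoStorage({ mongoUri: 'mongodb://localhost/botkit' });

    // User/channel/team state is read from and written to the shared database
    // instead of being held only in this process's memory.
    const controller = Botkit.slackbot({ storage });

With state in a shared database, any instance behind the load balancer can look up a user's data; note that classic Botkit still keeps in-flight conversation objects in process memory, so sticky sessions may be needed on top of shared storage.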

Related

RethinkDB - How to stream data to the browser

Context
Greetings,
One day I randomly found RethinkDB and was really fascinated by the whole real-time changes thing. In order to learn how to use this tool, I quickly spun up a container running RethinkDB and started a small project. I wanted to make something very simple, so I thought about creating a service in which speakers can create rooms and the audience can ask questions. Other users can upvote questions in order to let the speaker know which ones are the best. Obviously this project has a lot of real-time needs that I believe are best satisfied by RethinkDB.
Design
I wanted to use a very specific set of tools for this: the backend would be built with Laravel Lumen, the frontend with Vue.js, and the database would of course be RethinkDB.
The problem
RethinkDB, it seems, is not designed to be exposed to the end user directly, even though in this case there is no security concern.
Assuming the user only needs to see the questions and the upvotes in real time, no write permissions are needed, and if a user changes the room ID nothing bad will happen, since the rooms are all publicly accessible.
Therefore something is needed to wait for data updates and push them through a socket to the client (socket.io, for example, or Pusher).
Given that the backend is written in PHP, I cannot tell Lumen to stay awake and wait for data updates. From what I have seen in online tutorials, a secondary system should be used that listens for changes and then pushes them (say, a Node.js service).
This is understandable; however, I strongly believe that this way of transferring the data to the user is inefficient and defeats the purpose of RethinkDB.
If I have to send the action from the client's computer (the user asks a question), save it to the database, have a script that listens for changes, then push the changes to socket.io, and finally have the client (Vue.js) act when a new event arrives, what is the point of having a real-time database in the first place?
I could avoid all this headache simply by having the Lumen app push the event directly to socket.io and use any other database system instead.
I really can't understand the point of all this. I am not experienced with NoSQL databases by any means, but I really want to experiment with them.
Thank you.
This is understandable; however, I strongly believe that this way of transferring the data to the user is inefficient and defeats the purpose of RethinkDB.
RethinkDB has no built-in mechanism to transfer data to end users, and it has no access control in the conventional sense either. The common way, like you said, is to spin up one or more Node instances running socket.io. On each instance you can listen to your RethinkDB change streams and use socket.io's broadcast functionality. That would be the common approach, but as RethinkDB's change streams are pretty well optimized, you could also open a change stream for every incoming socket.io connection.
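As a rough sketch of the shared-changefeed variant, assuming the official rethinkdb Node driver and socket.io (the database, table, and event names are made up for this example):

    import * as r from 'rethinkdb';
    import { Server } from 'socket.io';

    const io = new Server(3000); // clients connect via socket.io-client

    async function main() {
      const conn = await r.connect({ host: 'localhost', port: 28015, db: 'qa' }); // assumed DB name

      // One changefeed on the questions table, broadcast to every connected client.
      const cursor = await r.table('questions').changes().run(conn);
      cursor.each((err, change) => {
        if (err) throw err;
        io.emit('question:changed', change.new_val); // hypothetical event name
      });
    }

    main().catch(console.error);

The per-connection variant is the same idea, except the changefeed (possibly filtered to one room) is opened inside io.on('connection', ...) and emitted only to that socket.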

Using NeoLoad to test ZK application

I am trying to use NeoLoad 5.2 to record a test scenario for a ZK application.
Unfortunately, it looks like some operations are not recorded. For example:
The login and password from the login form are not shown among the requests
The population of combo boxes is not shown
I prepared the ZK app to generate repeatable component and desktop IDs.
Does anybody have experience with this? Should I configure NeoLoad or the ZK application in some special way to record all the data exchange that happens?
We had a similar issue where the recording did not have any source (empty body). Luckily the playback did return content, which we used for correlation and validation. It does seem to be a bug in the toolset, but we bypassed the issue.

Send server information to client

Last semester we had to develop the game Ludo in JavaScript and HTML/CSS. That was pretty easy. Now we have to develop a backend with GWT (Java) to create a multiplayer game. Sadly, we haven’t got much information on how to develop with GWT and the exercise is quite difficult at the beginning.
At the moment I am trying to create a kind of lobby where different players can join.
My idea was to use some input fields, where the player could enter his name and join the lobby. But I don’t know how to give the other clients the information that a new player has joined.
I created an asynchronous interface (RPC) through which a player can submit his name to the server (like this example). This works OK. But how should I share this information? Our lecturer said we should use JSON to share information, but I don't know how that helps in this situation.
Is there a way to send information to the clients? I have read a lot and only found suggestions to use additional libraries such as gwt-comet.
I really have no clue how to go on. I'm thankful for every bit of help and information!
Greetz
You have two options: push and pull.
"Pull" option:
Other players get the required information when they join the lobby and/or do something else. You can also schedule this information to be pulled periodically (like once every 10 minutes). You can use the same RPC mechanism to get data from the server to a client. "Pull" means that the client initiates the request and the server responds with the information.
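In GWT the pull is just a Timer that calls your existing async RPC service on an interval. Purely as an illustration (not GWT code; TypeScript is used here, and the endpoint and interval are made up), the shape of it is:

    // Illustrative polling loop: ask the server for the lobby roster every few
    // seconds and update the UI when the answer arrives.
    const POLL_INTERVAL_MS = 5000; // assumed value

    function renderPlayerList(players: string[]): void {
      console.log('players in lobby:', players); // replace with your real UI update
    }

    async function pollLobby(): Promise<void> {
      const res = await fetch('/lobby/players'); // hypothetical endpoint returning a JSON array of names
      const players: string[] = await res.json();
      renderPlayerList(players);
    }

    setInterval(() => { pollLobby().catch(console.error); }, POLL_INTERVAL_MS);

In GWT the setInterval/fetch pair corresponds to com.google.gwt.user.client.Timer plus the callback of your generated async RPC interface.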
"Push" option:
When a new player joins, the server pushes this new data to all other players. The best solution depends on your game implementation. Comet is a good option, as Jean-Michel mentioned, but it's more complicated and more "expensive" from a resource point of view. You should use this option if you need real-time status updates for your game.
I would suggest Errai, and ErraiBus in particular. From a Java perspective you are only sending events via an event bus (the GoF Observer pattern), and all the magic with Ajax push happens behind the scenes.

Using HTTPS and multiple NSURLProtectionSpace's in iOS

I'm creating an iOS app that requires the user to log in at startup and then uses those credentials to query 4-5 different services on a server over the course of the session.
The server (xyz) itself doesn't accept the credentials, but the services it provides do. For example, https://xyz/service1 works, while https://xyz doesn't.
Now what I'm wondering is whether anything stands in the way of creating 4-5 NSURLProtectionSpaces at login, one for each service on the server, and then using the corresponding protection space for each service.
Or is there a better way of implementing something that could work in this situation?
All help would be appreciated.
Turns out that there is nothing that stands in the way of creating multiple NSURLProtectionSpaces, since each is created for a separate URL.

Keeping iPhone application in sync with GWT application

I'm working on an iPhone application that should work in offline and online modes.
In its online mode it's supposed to feed all the information the user enters to a web service backed by GWT/GAE.
In its offline mode it's supposed to store the information locally and, when a connection is available, sync it up to the web service.
Currently my plan is as follows:
Provide a connection between the app and the web service, using Protocol Buffers for efficient over-the-wire communication
Work with local DB using Core Data
Poll the network status, and when available sync the database and keep some sort of local-db-to-remote-db key synchronization.
The question is: am I headed in the right direction? Are there standard patterns for implementing this? Maybe someone can point me to an open-source application that works in a similar fashion?
I am really new to iPhone coding, and would be very glad to hear any suggestions.
Thanks
I think you're blurring two questions together.
If you've got a question about making a GWT web interface, that's one question.
Questions about how to sync an iPhone to a web service are a different question. For that, you don't want to use GWT's RPCs for syncing, as you'd have to fake out the 'browser-side' of the serialization system in your iPhone code, which GWT normally provides for you.
About the system design direction:
First, if there is no real need, do not create two different apps (one GWT and one iPhone).
Create one well-written GWT app. It will work offline without a problem and can manage your data using the HTML5 offline application cache.
If you must create two separate apps,
then at least save yourself the effort and do not write the server twice. If you go with the standard GWT approach you will almost certainly fail to talk to the server from a standalone app (it is zipped JSON over HTTP with some tricky headers...) or you will end up writing things twice, so look into the Restlet library; it is well supported on GAE.
About keeping things in sync while switching between offline and online:
There are several approaches to consider, and none of them is perfect. So when you consider yours, think about what the user expects... Do not be Microsoft Word; do not try to outsmart the user.
If there is at least one scenario in your use cases that demands user intervention to merge changes (and there will be, take it to the bank), then you will have to implement a UI for it, and then there is a good reason to use it often, so the user gets used to it. That is better than the user first seeing it a long while after starting to use the app because the need for it is rare thanks to some super-duper merging logic that only asks the user in very special cases... Don't do that.
Balance the effort: the mess that a bug in such code introduces for the user is much more painful than all the benefit put together.
So, the HOW:
One way is the Do-Undo way:
While offline, keep a log of the actions the user performed on the data, in the order in which they were performed.
As soon as you are connected, send them to the server and execute them. The same goes from server to client.
This will work fine in most cases, as long as you are not writing Photoshop-like software with huge amounts of data per operation. This is essentially the Command (action) pattern from the Gang of Four.
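Purely as an illustration of that action log (sketched in TypeScript; in the real iPhone app this would sit on top of your local store, e.g. Core Data, and the endpoint and action shape are assumptions):

    // Sketch only: queue timestamped actions while offline, replay them in order
    // once the connection is back.
    interface LoggedAction {
      type: 'create' | 'update' | 'delete';
      entity: string;        // whatever your domain objects are, e.g. 'move'
      payload: unknown;
      timestamp: number;
    }

    const pending: LoggedAction[] = [];

    function record(action: LoggedAction): void {
      pending.push(action);  // a real app would persist this queue durably
    }

    async function replay(): Promise<void> {
      while (pending.length > 0) {
        const action = pending[0];
        // Hypothetical sync endpoint; the server applies actions in timestamp order.
        await fetch('/sync/actions', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(action),
        });
        pending.shift();     // drop the action only after the server has accepted it
      }
    }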
Another way is the source-control way: versions, and maybe even locks. This is very application-dependent; DBMSs sometimes use it internally to implement transactions.
And there is always the option of being read-only when offline :-)
I wonder if you have considered using a sync framework to manage the synchronization. If that interests you, take a look at the open-source project OpenMobster's Sync service. You can do the following sync operations:
two-way
one-way client
one-way device
bootup
Besides that, all modifications are automatically tracked and synced with the cloud. You can keep using your app offline when the network connection is down; it will track any changes and automatically synchronize them with the cloud in the background when the connection returns. It also provides iCloud-like synchronization across multiple devices.
Also, modifications in the cloud are synced to the device using push notifications, so the data is always current even though it is stored locally.
Here is a link to the open source project: http://openmobster.googlecode.com
Here is a link to iPhone App Sync: http://code.google.com/p/openmobster/wiki/iPhoneSyncApp