SyncVar without NetworkServer.Spawn - unity3d

I have a somewhat complex tree of objects that is generated and configured at runtime. Since the information required to do this is available to both the server and the client, I would like to have both sides generate the objects independently and then link the networked parts up for syncing afterwards.
That is to say, I need a way to make SyncVar work for an object that exists on the server and client but was not originally spawned via NetworkServer.Spawn. Is there a way to manually configure NetworkIdentity such that the Unity networking system understands that something is the same object?
I have the ability to uniquely identify these objects across the network myself, I just need a way to communicate that to Unity. NetworkIdentity.netId is readonly, so that's not an option.

If you make all the initialisation happen purely on the server and then push the results to the clients, you remove the need to sync afterwards. This also removes the need to deal with duplicate information (which would ultimately be wasted CPU time on the client end).
However, if you want the clients to create the data as well, then I would suggest having them send appropriate messages to the server with their data; the server can then create the objects for them.
Set up message handlers with NetworkServer.RegisterHandler on the server instance for each type of message you need it to handle:
public enum netMessages {
    hello = 101,
    goodbye = 102,
    action = 103,
}
...
NetworkServer.RegisterHandler((short)netMessages.hello, new NetworkMessageDelegate(hdl_hello));
NetworkServer.RegisterHandler((short)netMessages.goodbye, new NetworkMessageDelegate(hdl_goodbye));
...
private void hdl_hello (NetworkMessage msg){
    nmsg_hello m = msg.ReadMessage<nmsg_hello>();
    ...
}
and use the Send method of NetworkClient to send messages to the server.
You will also need to define message classes based on MessageBase for the actual messages.
public class nmsg_hello : MessageBase {
    public int x;
    public float welcomeness;
}
NOTE: Make sure you don't base any of your network messages on each other; there seems to be a bug/feature in Unity (at least the last time I tried it) where it doesn't work if your message is derived from anything other than MessageBase as its immediate ancestor.
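On the client side, sending one of these messages could look roughly like the sketch below (how you obtain the NetworkClient depends on your setup; using NetworkManager.singleton.client and the field values here are just assumptions for illustration):
// minimal client-side sketch, assuming the default NetworkManager owns the client
NetworkClient client = NetworkManager.singleton.client;
client.Send((short)netMessages.hello, new nmsg_hello { x = 42, welcomeness = 1.0f });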

Related

Service Fabric, determine if specific actor exists

We are using Azure Service Fabric and are using actors to model specific devices, using the id of the device as the ActorId. Service Fabric will instantiate a new actor instance when we request an actor for a given id if it is not already instantiated, but I cannot seem to find an api that allows me to query if a specific device id already has an instantiated actor.
I understand that there might be some distributed/timing issues in obtaining the point-in-time truth but for our specific purpose, we do not need a hard realtime answer to this but can settle for a best guess. We would just like to, in theory, contact the current primary for the specific partition resolved by the ActorId and get back whether or not the device has an instantiated actor.
Ideally it is a fast/performant call, essentially faster than e.g. instantiating the actor and calling a method to understand if it has been initialized correctly and is not just an "empty" actor.
You can use the ActorServiceProxy to iterate through the information for a specific partition but that does not seem to be a very performant way of obtaining the information.
Anyone with insights into this?
The only official way to check whether the actor has previously been activated in any service partition is using the ActorServiceProxy query, as described here:
IActorService actorServiceProxy = ActorServiceProxy.Create(
    new Uri("fabric:/MyApp/MyService"), partitionKey);

ContinuationToken continuationToken = null;
ActorInformation actor = null;
do
{
    PagedResult<ActorInformation> page = await actorServiceProxy.GetActorsAsync(continuationToken, cancellationToken);
    actor = page.Items.FirstOrDefault(x => x.ActorId == idToFind);
    continuationToken = page.ContinuationToken;
}
while (actor == null && continuationToken != null);
// actor is non-null here if the ActorId has been activated in this partition before
By their nature, SF Actors are virtual, which means they always exist, even if you haven't activated them previously, so this check is a bit harder to do.
As you said, it is not performant to query all actors, so the other workarounds you could try are:
Option 1: store the IDs in a Reliable Dictionary elsewhere; every time an actor is activated, raise an event and insert the ActorId into the dictionary if it is not there yet.
You can use the OnActivateAsync() actor event to notify of its creation, or
You can use the custom actor factory in the ActorService to register actor activation.
You can store the dictionary in another actor, or in another StatefulService.
Option 2: create a property in the actor that is set by the actor itself when it is activated (see the sketch after this list).
OnActivateAsync() checks whether this property has been set before.
If it is not set yet, you set it and keep the result in a (non-persisted) variable to say the actor is new.
Whenever you interact with the actor, you update this to indicate it is not new anymore.
On the next activation, the property will already be set, and nothing should happen.
Option 3: create a custom IActorStateProvider to do the same as option 2, handling it one level underneath the actor instead of inside it. Honestly, I think it is a fair amount of work; it would only be handy if you have to do the same for many actor types. Options 1 and 2 are much easier.
Option 4: do as Peter Bons suggested and store the ActorId outside the ActorService, e.g. in a database. I would only suggest this option if you have to check it from outside the cluster.
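For option 2, a minimal sketch of what the activation check could look like (the DeviceActor/IDeviceActor names and the "initialized" state name are assumptions for illustration; this uses the Reliable Actors StateManager):
internal class DeviceActor : Actor, IDeviceActor
{
    // Non-persisted flag, only meaningful for the lifetime of this activation.
    private bool isNewActor;

    public DeviceActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId) { }

    protected override async Task OnActivateAsync()
    {
        // TryAddStateAsync returns true only if the state name did not exist yet,
        // i.e. this actor has never persisted any state before.
        isNewActor = await StateManager.TryAddStateAsync("initialized", true);
    }
}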
The following snippet can help you if you want to manage these events outside the actor.
private static void Main()
{
    try
    {
        ActorRuntime.RegisterActorAsync<NetCoreActorService>(
            (context, actorType) => new ActorService(context, actorType,
                new Func<ActorService, ActorId, ActorBase>((actorService, actorId) =>
                {
                    RegisterActor(actorId); // the custom method to register the actor if new
                    return (ActorBase)Activator.CreateInstance(actorType.ImplementationType, actorService, actorId);
                })
            )).GetAwaiter().GetResult();

        Thread.Sleep(Timeout.Infinite);
    }
    catch (Exception e)
    {
        ActorEventSource.Current.ActorHostInitializationFailed(e.ToString());
        throw;
    }
}

private static void RegisterActor(ActorId actorId)
{
    // Here you put the logic to register the actor creation elsewhere
}
Alternatively, you could create a stateful DeviceActorStatusActor which would be notified (called) by DeviceActor as soon as it's created. (Share the ActorId for correlation.)
Depending on your needs you can also register multiple Actors with the same status-tracking actor.
You'll have great performance and near real-time information.
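A rough sketch of that notification path, assuming hypothetical IDeviceActorStatusActor/RegisterCreatedAsync names and string-based ActorIds (none of these names come from the question):
// In DeviceActor: tell a single well-known tracking actor that this device actor now exists.
protected override async Task OnActivateAsync()
{
    var statusActor = ActorProxy.Create<IDeviceActorStatusActor>(new ActorId("device-status"));
    await statusActor.RegisterCreatedAsync(this.Id.GetStringId());
}

// In DeviceActorStatusActor: remember the id so a later ExistsAsync(deviceId) query can answer quickly.
public Task RegisterCreatedAsync(string deviceId)
{
    return StateManager.AddOrUpdateStateAsync(deviceId, true, (key, existing) => true);
}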

Why are static GWT fields not transferred to the client?

ConfigProperty.idPropertyMap is filled on the server side. (verified via log output)
Accessing it on the client side shows it's empty. :-( (verified via log output)
Is this some default behaviour? (I don't think so)
Is the problem maybe related to the inner class ConfigProperty.IdPropertyMap, java.util.HashMap usage, serialization or some field access modifier issue?
Thanks for your help
// the transfer object
public class ConfigProperty implements IsSerializable, Comparable {
    ...
    static public class IdPropertyMap extends HashMap
        implements IsSerializable
    {
        ...
    }

    protected static IdPropertyMap idPropertyMap = new IdPropertyMap();
    ...
}
// the server service
public class ManagerServiceImpl extends RemoteServiceServlet implements
    ManagerService
{
    ...
    public IdPropertyMap getConfigProps(String timeToken)
        throws ConfiguratorException
    {
        ...
    }
}
added from below after some good answers (thanks!):
Answer bottom line: static field sync is not currently implemented/supported; someone (or I) would have to file a feature request.
Just my perspective (a fallen-in-love newbie to GWT :-)):
I understand pretty well (not perfectly! ;-)) the possible implications of "global" variable syncing (a dependency graph or usage of annotations could be useful).
But to a new (otherwise experienced Java EE/web) user it looks like this:
you create some myapp.shared.dto.MyClass class (dto = data transfer objects)
you add some static fields in it that just represent collections of those objects (and maybe some other DTOs)
you can also do this on the client side and all the other static methods work as well
only thing not working is synchronization (which is not sooo bad in the first place)
BUT: some provided annotation, let's say @Transfer static Collection<MyClass> myObjList;, would be handy, since I think I know the impact and benefits this would bring.
In my case it's rather simple since the client is more static, but I would like to have this data without explicitly implementing it, if the GWT framework could do it.
Static variables are purely class variables; they have nothing to do with individual instances, and serialization applies only to objects.
So you are always getting an empty ConfigProperty.idPropertyMap.
The idea of RPC is not that you can act as though the client and the server are exactly the same JVM, but that they can share the objects that you pass over the wire. To send a static field over the wire, from the server to the client, the object stored in that field must be returned from the RPC method.
Static properties are not serialized and sent over the wire, because they do not belong to a single object, but to the class itself.
public class MyData implements Serializable {
    protected String name;             // sent over the wire, each MyData has its own name
    protected String key;
    protected static String masterKey; // all objects on the server or client share this;
                                       // it cannot be sent over RPC. Instead, another
                                       // RPC method could access it
}
Note, however, that it will only be that one instance which will be shared - if something else on the server changes that field, all clients which have asked for a copy will need to be updated

GWT: Is it OK to edit the same proxy multiple times?

I'm using GWT 2.4 with RequestFactory, but not everything is clear to me yet.
In this article the author wrote about the situation where we used an entity proxy with one instance of RequestContext and want to reuse (edit()) this entity proxy with another instance of RequestContext:
It cannot be edited because it has already a requestContext assigned.
If you want to change it you must retrieve instance of this entity
from server again
But I'm getting no exceptions when I execute this code:
RequestContext newRequest1 = factory.myRequest();
newRequest1.edit(proxy);
RequestContext newRequest2 = factory.myRequest();
newRequest2.edit(proxy);
The problems (exception) described by the author pop up when I run this version:
RequestContext newRequest1 = factory.myRequest();
MyProxy edited = newRequest1.edit(proxy);
RequestContext newRequest2 = factory.myRequest();
newRequest2.edit(edited);
So it seems that only the editable copy returned by edit() is directly tied to the RequestContext instance.
In that case, is there something wrong with the approach in which I keep one instance of an (uneditable/frozen) proxy in my edit view and, each time the user clicks the "edit" button, edit() it with a fresh RequestContext? Or should I obtain a fresh instance of the proxy each time too?
Getting a new instance of the proxy seems a bit awkward to me, but I guess reusing one proxy instance may cause some issues related to sending the delta of changes to the server?
So to rephrase the question: is it good practice to reuse a single instance of a proxy with multiple RequestContexts?
There's no problem editing the same proxy twice (or more), as long as there's only a single editable instance at a time (your first code snippet should throw; if it doesn't, then it's a bug; it could work if you don't keep references to both the RequestContext and the edited proxy).
Note that RequestFactory sends only the modified properties to the server, but it does so by diffing against the non-editable instance passed to edit(); so you should use the most recent instance possible to keep your server-side/persisted data as close to your client-side data as possible (this may seem obvious, but it can lead to surprises in practice: if you see foo on the client but have bar on the server, you'll keep bar on the server side until you modify the property on the client side to something other than foo).

What gets sent to the server with RequestFactory

I have trouble understanding what RequestFactory sends to the server. I have a method
Request<NodeProxy> persist(NodeProxy node)
NodeProxy is an object from a tree-like structure (it has child nodes and one parent node, all of type NodeProxy). I've changed only one attribute in the node and called persist.
The question now is: what gets sent to the server?
In the docs here https://developers.google.com/web-toolkit/doc/latest/DevGuideRequestFactory
there is:
"On the client side, RequestFactory keeps track of objects that have been modified and sends only changes to the server, which results in very lightweight network payloads."
In the same docs, in the chapter Entity Relationships, there is also this:
"RequestFactory automatically sends the whole object graph in a single request."
And I'm wondering how should I understand this.
My problem:
My tree structure can get quite big, let's say 50 nodes. The problem is that for an update of one attribute the method
public IEntity find(Class<? extends IEntity> clazz, String id)
in the class
public class BaseEntityLocator extends Locator<IEntity, String>
gets called for each object in the graph, which is not acceptable.
Thank you in advance.
The problem you're facing is that RequestFactory automatically edit()s proxies when getting properties, and there's a bug when constructing the request payload that causes the whole graph of proxies to be implicitly edited that way, even if you didn't call the getter yourself.
That bug has many repercussions, including false-positives in RequestContext's isChanged(): http://code.google.com/p/google-web-toolkit/issues/detail?id=5952
I have great hopes that this will be fixed in GWT 2.5 (due in the next few weeks).

Class Design: Demeter vs. Connection Lifetimes

Okay, so here's a problem I'm running into.
I have some classes in my application that have methods that require a database connection. I am torn between two different ways to design the classes, both of which are centered around dependency injection:
Provide a property for the connection that is set by the caller prior to method invocation. This has a few drawbacks.
Every method relying on the connection property has to validate that property to ensure that it isn't null, that it's open, and that it's not involved in a transaction if that would muck up the operation.
If the connection property is unexpectedly closed, all the methods have to either (1.) throw an exception or (2.) coerce it open. Depending on the level of robustness you want, either case is appropriate. (Note that this is different from a connection that is passed to a method in that the reference to the connection exists for the lifetime of the object, not simply for the lifetime of the method invocation. Consequently, the volatility of the connection just seems higher to me.)
Providing a Connection property seems (to me, anyway) to scream out for a corresponding Transaction property. This creates additional overhead in the documentation, since you'd have to make it fairly obvious when the transaction was being used, and when it wasn't.
On the other hand, Microsoft seems to favor the whole set-and-invoke paradigm.
Require the connection to be passed as an argument to the method. This has a few advantages and disadvantages:
The parameter list is naturally larger. This is irksome to me, primarily at the point of call.
While a connection (and a transaction) must still be validated prior to use, the reference to it exists only for the duration of the method call.
The point of call is, however, quite clear. It's very obvious that you must provide the connection, and that the method won't be creating one behind your back automagically.
If a method doesn't require a transaction (say a method that only retrieves data from the database), no transaction is required. There's no lack of clarity due to the method signature.
If a method requires a transaction, it's very clear due to the method signature. Again, there's no lack of clarity.
Because the class does not expose a Connection or a Transaction property, there's no chance of callers trying to drill down through them to their properties and methods, thus enforcing the Law of Demeter.
I know, it's a lot. But on the one hand, there's the Microsoft Way: Provide properties, let the caller set the properties, and then invoke methods. That way, you don't have to create complex constructors or factory methods and the like. Also, avoid methods with lots of arguments.
Then, there's the simple fact that if I expose these two properties on my objects, they'll tend to encourage consumers to use them in nefarious ways. (Not that I'm responsible for that, but still.) But I just don't really want to write crappy code.
If you were in my shoes, what would you do?
Here is a third pattern to consider:
Create a class called ConnectionScope, which provides access to a connection
Any class at any time, can create a ConnectionScope
ConnectionScope has a property called Connection, which always returns a valid connection
Any (and every) ConnectionScope gives access to the same underlying connection object (within some scope, maybe within the same thread, or process)
You then are free to implement that Connection property however you want, and your classes don't have a property that needs to be set, nor is the connection a parameter, nor do they need to worry about opening or closing connections.
More details:
In C#, I'd recommend ConnectionScope implement IDisposable, that way your classes can write code like "using ( var scope = new ConnectionScope() )" and then ConnectionScope can free the connection (if appropriate) when it is destroyed
If you can limit yourself to one connection per thread (or process) then you can easily set the connection string in a [thread] static variable in ConnectionScope
You can then use reference counting to ensure that your single connection is re-used when it's already open, and that connections are released when no one is using them
Updated: Here is some simplified sample code:
public class ConnectionScope : IDisposable
{
    private static Connection m_Connection;
    private static int m_ReferenceCount;

    public Connection Connection
    {
        get
        {
            return m_Connection;
        }
    }

    public ConnectionScope()
    {
        if ( m_Connection == null )
        {
            m_Connection = OpenConnection();
        }
        m_ReferenceCount++;
    }

    public void Dispose()
    {
        m_ReferenceCount--;
        if ( m_ReferenceCount == 0 )
        {
            m_Connection.Dispose();
            m_Connection = null;
        }
    }
}
Example code of how one (any) of your classes would use it:
using ( var scope = new ConnectionScope() )
{
    scope.Connection.ExecuteCommand( ... );
}
I would prefer the latter method. It sounds like your classes use the database connection as a conduit to the persistence layer. Making the caller pass in the database connection makes it clear that this is the case. If the connection/transaction were represented as a property of the object, then things are not so clear and all of the ownership and lifetime issues come out. Better to avoid them from the start.