I'm trying to make a Photon Bolt game that connects two devices. The problem is that the client tends to get disconnected a lot, and it doesn't reconnect automatically. I've tried methods like ReconnectAndRejoin, but it seems that only works in PUN. Right now I'm using this custom solution, without success:
[BoltGlobalBehaviour(BoltNetworkModes.Client)]
public class InitialiseGameClient : Photon.Bolt.GlobalEventListener
{
private bool disconnected;
public void Update(){
if(disconnected){
Reconnect();
}
}
public override void Disconnected(BoltConnection connection)
{
disconnected = true;
}
public void Reconnect(){
BoltLauncher.StartClient();
PlayerPrefs.DeleteAll();
if (BoltNetwork.IsRunning && BoltNetwork.IsClient)
{
foreach (var session in BoltNetwork.SessionList)
{
UdpSession udpSession = session.Value as UdpSession;
if (udpSession.Source != UdpSessionSource.Photon)
continue;
PhotonSession photonSession = udpSession as PhotonSession;
string sessionDescription = String.Format("{0} / {1} ({2})",
photonSession.Source, photonSession.HostName, photonSession.Id);
RoomProtocolToken token = photonSession.GetProtocolToken() as RoomProtocolToken;
if (token != null)
{
sessionDescription += String.Format(" :: {0}", token.ArbitraryData);
}
else
{
object value_t = -1;
object value_m = -1;
if (photonSession.Properties.ContainsKey("t"))
{
value_t = photonSession.Properties["t"];
}
if (photonSession.Properties.ContainsKey("m"))
{
value_m = photonSession.Properties["m"];
}
sessionDescription += String.Format(" :: {0}/{1}", value_t, value_m);
}
ServerConnectToken connectToken = new ServerConnectToken
{
data = "ConnectTokenData"
};
Debug.Log((int)photonSession.Properties["t"]);
var propertyID = PlayerPrefs.GetInt("PropertyID", 2);
if((int)photonSession.Properties["t"] == propertyID){
BoltMatchmaking.JoinSession(photonSession, connectToken);
disconnected = false;
}
}
}
}
}
With this method I'm trying to reuse, inside the reconnect function, the same code used to connect the client the first time, and keep trying until the client manages to connect. However, the code never seems to execute: the Disconnected callback gets triggered, but the reconnect never happens. Is there any built-in Bolt function that helps with reconnecting? Thanks in advance.
You need to shut Bolt down, then try reconnecting. Even if you don't get the exception below (it's just an example), you should call BoltLauncher.Shutdown() first and only then BoltLauncher.StartClient() etc.:
BoltException: Bolt is already running, you must call BoltLauncher.Shutdown() before starting a new instance of Bolt.
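For example, here is a rough sketch of what that could look like, built on the code from the question. The SessionListUpdated override and the using directives follow the standard Bolt samples and may differ between Bolt versions, so treat the exact signatures as assumptions; the "t"/PropertyID check from the question is omitted for brevity and would go back in before JoinSession.

using System;
using Photon.Bolt;
using Photon.Bolt.Matchmaking;
using UdpKit;

[BoltGlobalBehaviour(BoltNetworkModes.Client)]
public class InitialiseGameClient : GlobalEventListener
{
    private bool wantsReconnect;

    public override void Disconnected(BoltConnection connection)
    {
        // Tear Bolt down completely; starting a new client while Bolt is still running throws.
        wantsReconnect = true;
        BoltLauncher.Shutdown();
    }

    private void Update()
    {
        // Only start a new client once the shutdown has actually finished.
        if (wantsReconnect && !BoltNetwork.IsRunning)
        {
            wantsReconnect = false;
            BoltLauncher.StartClient();
        }
    }

    // StartClient() is asynchronous, so join from the session list callback
    // rather than right after the StartClient() call.
    public override void SessionListUpdated(Map<Guid, UdpSession> sessionList)
    {
        foreach (var session in sessionList)
        {
            if (session.Value is PhotonSession photonSession)
            {
                var token = new ServerConnectToken { data = "ConnectTokenData" };
                BoltMatchmaking.JoinSession(photonSession, token);
                break;
            }
        }
    }
}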
I'm using Unity Mirror for networking my app so that a central server (not host) can send commands to the clients it is connected to.
On play, the "game" will make the program a client or server automatically on play, so i don't have to use the Client/Server only buttons provided by the NetworkManagerHUD.
Currently I'm facing 2 problems:
Client disconnects right after a connection with the server is made. When I override the OnClientConnect function, I keep the line base.OnClientConnect(conn). After stepping into the original function, I conclude that autoCreatePlayer being set to true is what causes this problem (the server and client are two instances of the program running on the same computer, as I can only test using localhost).
public override void OnClientConnect(NetworkConnection conn)
{
base.OnClientConnect(conn); //This line causes the error message
clientConnected = true;
GameObject[] prefabs = Resources.LoadAll<GameObject>("NetworkingComponents");
foreach (var prefab in prefabs)
{
NetworkClient.RegisterPrefab(prefab);
}
GameObject[] gos = Resources.LoadAll<GameObject>("NetworkingComponents");
}
Perhaps the most critical issue: referring to the previous problem, if I do remove the line base.OnClientConnect(conn), the client can connect, but all networked GameObjects (with NetworkIdentity) still don't show up when connected as a client, even though the NetworkManagerHUD says the program is connected as a client. (Strangely, they do show up when connected as the server.)
Here is the rest of the overridden NetworkManager code.
public class MyNetworkManager : NetworkManager
{
public GameObject dropdown;
public Canvas canvas;
//---------------------------Networking stuff----------------------------------
public List<NetworkNode> networkedNodes { get; } = new List<NetworkNode>();
public List<Settings> networkedSettings { get; } = new List<Settings>();
public List<NetworkedVisualisersDisplay> visualisersDisplays { get; } = new List<NetworkedVisualisersDisplay>();
public List<Visualiser> visualisers{ get; } = new List<Visualiser>();
public static MyNetworkManager instance = null;
public NetworkedVisualisersDisplay visDisplayPrefab;
public NetworkNode networkNode;
private string homeName;
public volatile bool clientConnected = false;
public bool IsClientConnected()
{
return clientConnected;
}
//the purpose of having a delay is that we need to determine if the call to StartClient() actually started the player as a client. It could fail if it’s the first player instance on the network.
public IEnumerator DelayedStart()
{
//base.Start();
StartClient();
yield return new WaitForSeconds(2);
print("conn count " + NetworkServer.connections.Count);
if (!IsClientConnected())
{
NetworkClient.Disconnect();
print("starting as server");
StartServer();
clientConnected = false;
}
else
{
print("starting as client");
}
visDisplayPrefab = Resources.Load<NetworkedVisualisersDisplay>("NetworkingComponents/NetworkedVisualisersDisplay");
if (instance == null)
{
instance = this;
print("instance = " + this);
}
else
{
print("manager destroyed");
Destroy(gameObject);
}
yield return null;
}
//-----------------------------------------------------------------------------
public override void Start(){
StartCoroutine(DelayedStart());
}
public override void OnStartServer()
{
GameObject[] prefabs = Resources.LoadAll<GameObject>("NetworkingComponents");
foreach (var prefab in prefabs)
{
spawnPrefabs.Add(prefab);
}
}
public override void OnServerChangeScene(string scenename)
{
if (scenename.Equals("Visualisers"))
{
for (int i = 0; i < visualisersDisplays.Count; i++)
{
var conn = networkedNodes[i].connectionToClient;
NetworkedVisualisersDisplay visSceneInstance = Instantiate(visualisersDisplays[i]);
NetworkServer.Destroy(conn.identity.gameObject);
NetworkServer.ReplacePlayerForConnection(conn, visSceneInstance.gameObject);
}
}
else if (Settings.Instance.sceneNames.Contains(scenename))
{
for (int i = 0; i < visualisersDisplays.Count; i++)
{
var conn = visualisers[i].connectionToClient;
var visInstance = Instantiate(visualisers[i]);
NetworkServer.Destroy(conn.identity.gameObject);
NetworkServer.ReplacePlayerForConnection(conn, visInstance.gameObject);
}
}
}
public override void OnServerAddPlayer(NetworkConnection conn)
{
NetworkNode n = Instantiate(networkNode);
NetworkServer.AddPlayerForConnection(conn, n.gameObject);
NetworkNode.instance.DisplayMessage();
}
public override void OnClientConnect(NetworkConnection conn)
{
base.OnClientConnect(conn);
//we are connected as a client
clientConnected = true;
GameObject[] prefabs = Resources.LoadAll<GameObject>("NetworkingComponents");
foreach (var prefab in prefabs)
{
NetworkClient.RegisterPrefab(prefab);
}
}
}
Any help will be greatly appreciated!
In my Processor API code I store the messages in a key-value store, and every 100 messages I make a POST request. If something fails while trying to send the messages (the API is not responding, etc.), I want to stop processing messages until there is evidence that the API calls work.
Here is my code:
public class BulkProcessor implements Processor<byte[], UserEvent> {
private KeyValueStore<Integer, ArrayList<UserEvent>> keyValueStore;
private BulkAPIClient bulkClient;
private String storeName;
private ProcessorContext context;
private int count;
@Autowired
public BulkProcessor(String storeName, BulkAPIClient bulkClient) {
this.storeName = storeName;
this.bulkClient = bulkClient;
}
@Override
public void init(ProcessorContext context) {
this.context = context;
keyValueStore = (KeyValueStore<Integer, ArrayList<UserEvent>>) context.getStateStore(storeName);
count = 0;
// to check every 15 minutes if there are any remainders in the store that are not sent yet
this.context.schedule(Duration.ofMinutes(15), PunctuationType.WALL_CLOCK_TIME, (timestamp) -> {
if (count > 0) {
sendEntriesFromStore();
}
});
}
@Override
public void process(byte[] key, UserEvent value) {
int userGroupId = Integer.valueOf(value.getUserGroupId());
ArrayList<UserEvent> userEventArrayList = keyValueStore.get(userGroupId);
if (userEventArrayList == null) {
userEventArrayList = new ArrayList<>();
}
userEventArrayList.add(value);
keyValueStore.put(userGroupId, userEventArrayList);
if (count == 100) {
sendEntriesFromStore();
}
}
private void sendEntriesFromStore() {
KeyValueIterator<Integer, ArrayList<UserEvent>> iterator = keyValueStore.all();
while (iterator.hasNext()) {
KeyValue<Integer, ArrayList<UserEvent>> entry = iterator.next();
BulkRequest bulkRequest = new BulkRequest(entry.key, entry.value);
if (bulkRequest.getLocation() != null) {
URI url = bulkClient.buildURIPath(bulkRequest);
try {
bulkClient.postRequestBulkApi(url, bulkRequest);
keyValueStore.delete(entry.key);
} catch (BulkApiException e) {
logger.warn(e.getMessage(), e.fillInStackTrace());
}
}
}
iterator.close();
count = 0;
}
@Override
public void close() {
}
}
Currently, if a call to the API fails, my code just moves on to the next 100 messages and adds them to the keyValueStore (and this keeps happening for as long as the API keeps failing). I don't want that. Instead I would prefer to stop the stream and continue once the keyValueStore has been emptied. Is that possible?
Could I throw a StreamsException?
try {
bulkClient.postRequestBulkApi(url, bulkRequest);
keyValueStore.delete(entry.key);
} catch (BulkApiException e) {
throw new StreamsException(e);
}
Would that kill my streams app so that the process dies?
You should only delete a record from the state store after you make sure it has been successfully processed by the API, so remove the first keyValueStore.delete(entry.key); and keep the second one. Otherwise you can potentially lose messages when keyValueStore.delete is committed to the underlying changelog topic but your messages have not been successfully processed yet, which gives you only an at-most-once guarantee.
Just wrap the API call in an infinite loop and keep trying until the record is successfully processed; your processor will not consume new messages from the upstream processor node because it runs in the same StreamThread:
private void sendEntriesFromStore() {
KeyValueIterator<Integer, ArrayList<UserEvent>> iterator = keyValueStore.all();
while (iterator.hasNext()) {
KeyValue<Integer, ArrayList<UserEvent>> entry = iterator.next();
//remove this state store delete code : keyValueStore.delete(entry.key);
BulkRequest bulkRequest = new BulkRequest(entry.key, entry.value);
if (bulkRequest.getLocation() != null) {
URI url = bulkClient.buildURIPath(bulkRequest);
while (true) {
try {
bulkClient.postRequestBulkApi(url, bulkRequest);
keyValueStore.delete(entry.key); // only delete after the message is successfully processed, to get an at-least-once guarantee
break;
} catch (BulkApiException e) {
logger.warn(e.getMessage(), e.fillInStackTrace());
}
}
}
}
iterator.close();
count = 0;
}
Yes, you could throw a StreamsException; the StreamTask will be migrated to another StreamThread during the rebalance, possibly on the same application instance. If the API keeps throwing exceptions until all StreamThreads have died, your application will not exit automatically; it will just report the message below. So you should add a custom exception handler that exits your app when all stream threads have died, either via KafkaStreams#setUncaughtExceptionHandler or by listening for the stream state changing to ERROR:
All stream threads have died. The instance will be in error state and should be closed.
In the end I used a plain KafkaConsumer instead of Kafka Streams, but the bottom line was that I changed BulkApiException to extend RuntimeException, which I rethrow after logging it. So now it looks as follows:
} catch (BulkApiException bae) {
logger.error(bae.getMessage(), bae.fillInStackTrace());
throw new BulkApiException();
} finally {
consumer.close();
int exitCode = SpringApplication.exit(ctx, () -> 1);
System.exit(exitCode);
}
This way the application exits and Kubernetes restarts the pod. The reasoning is that if the API I'm forwarding the requests to is down, there is no point in continuing to read messages, so Kubernetes keeps restarting the pod until the other API is back up.
I've got an issue for which I am unable to post the full code (sorry) due to security reasons. The gist of my issue is that I have a ServerBootstrap, created as follows:
bossGroup = new NioEventLoopGroup();
workerGroup = new NioEventLoopGroup();
final ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addFirst("idleStateHandler", new IdleStateHandler(0, 0, 3000));
//Adds the MQTT encoder and decoder
ch.pipeline().addLast("decoder", new MyMessageDecoder());
ch.pipeline().addLast("encoder", new MyMessageEncoder());
ch.pipeline().addLast(createMyHandler());
}
}).option(ChannelOption.SO_BACKLOG, 128).option(ChannelOption.SO_REUSEADDR, true)
.option(ChannelOption.TCP_NODELAY, true)
.childOption(ChannelOption.SO_KEEPALIVE, true);
// Bind and start to accept incoming connections.
channelFuture = b.bind(listenAddress, listenPort);
Here createMyHandler() basically returns an extended implementation of ChannelInboundHandlerAdapter.
I also have a "client" listener that listens for incoming connection requests and is loaded as follows:
final String host = getHost();
final int port = getPort();
nioEventLoopGroup = new NioEventLoopGroup();
bootStrap = new Bootstrap();
bootStrap.group(nioEventLoopGroup);
bootStrap.channel(NioSocketChannel.class);
bootStrap.option(ChannelOption.SO_KEEPALIVE, true);
bootStrap.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addFirst("idleStateHandler", new IdleStateHandler(0, 0, getKeepAliveInterval()));
ch.pipeline().addAfter("idleStateHandler", "idleEventHandler", new MoquetteIdleTimeoutHandler());
ch.pipeline().addLast("decoder", new MyMessageDecoder());
ch.pipeline().addLast("encoder", new MyMessageEncoder());
ch.pipeline().addLast(MyClientHandler.this);
}
})
.option(ChannelOption.SO_REUSEADDR, true)
.option(ChannelOption.TCP_NODELAY, true);
// Start the client.
try {
channelFuture = bootStrap.connect(host, port).sync();
} catch (InterruptedException e) {
throw new MyException("Exception", e);
}
Where MyClientHandler is again a subclassed instance of ChannelInboundHandlerAdapter. Everything works fine: I get messages coming in from the "server" adapter, I process them, and send them back on the same context, and vice versa for the "client" handler.
The problem happens when I have to proxy some messages from the server or client handler to the other connection. Again, I am very sorry for not being able to post much code, but the gist of it is that I'm calling this from:
serverHandler.channelRead(ChannelHandlerContext ctx, Object msg) {
if (msg instanceof myProxyingMessage) {
if (ctx.channel().isActive()) {
ctx.channel().writeAndFlush(someOtherMessage);
**getClientHandler().writeAndFlush(myProxyingMessage);**
}
}
}
Now here's the problem: the bolded (client) writeAndFlush never actually writes the message bytes, and it doesn't throw any errors. The ChannelFuture returns false for everything (success, cancelled, done), and if I sync on it, it eventually times out for other reasons (a connection timeout set within my code).
I know I haven't posted all of my code, but I'm hoping that someone has some tips and/or pointers for how to isolate WHY it is not writing to the client context. I'm not a Netty expert by any stretch, and most of this code was written by someone else. Both handlers subclass ChannelInboundHandlerAdapter.
Feel free to ask any questions if you have any.
*****EDIT*********
I tried to proxy the request back to a DIFFERENT context/channel (i.e., the client channel) using the following test code:
public void proxyPubRec(int messageId) throws MQTTException {
logger.log(logLevel, "proxying PUBREC to context: " + debugContext());
PubRecMessage pubRecMessage = new PubRecMessage();
pubRecMessage.setMessageID(messageId);
pubRecMessage.setRemainingLength(2);
logger.log(logLevel, "pipeline writable flag: " + ctx.pipeline().channel().isWritable());
MyMQTTEncoder encoder = new MyMQTTEncoder();
ByteBuf buff = null;
try {
buff = encoder.encode(pubRecMessage);
ctx.channel().writeAndFlush(buff);
} catch (Throwable t) {
logger.log(Level.SEVERE, "unable to encode PUBREC");
} finally {
if (buff != null) {
buff.release();
}
}
}
public class MyMQTTEncoder extends MQTTEncoder {
public ByteBuf encode(AbstractMessage msg) {
PooledByteBufAllocator allocator = new PooledByteBufAllocator();
ByteBuf buf = allocator.buffer();
try {
super.encode(ctx, msg, buf);
} catch (Throwable t) {
logger.log(Level.SEVERE, "unable to encode PUBREC, " + t.getMessage());
}
return buf;
}
}
But the line ctx.channel().writeAndFlush(buff) above is NOT writing to the other channel. Any tips/tricks on debugging this sort of issue?
someOtherMessage has to be a ByteBuf.
So, take this:
serverHandler.channelRead(ChannelHandlerContext ctx, Object msg) {
if (msg instanceof myProxyingMessage) {
if (ctx.channel().isActive()) {
ctx.channel().writeAndFlush(someOtherMessage);
**getClientHandler().writeAndFlush(myProxyingMessage);**
}
}
}
... and replace it with this:
serverHandler.channelRead(ChannelHandlerContext ctx, Object msg) {
if (msg instanceof myProxyingMessage) {
if (ctx.channel().isActive()) {
ctx.channel().writeAndFlush(someOtherMessageAsByteBuf); // someOtherMessage encoded into a ByteBuf
**getClientHandler().writeAndFlush(myProxyingMessage);**
}
}
}
Actually, this turned out to be a threading issue. One of my threads was blocked/waiting while other threads were writing to the context, and because of this the writes were buffered and not sent, even with a flush. Problem solved!
Essentially, I put the first message's code in a Runnable/Executor thread, which allowed it to run separately so that the second write/response was able to write to the context. There are still potentially some issues with this (in terms of message ordering), but that is off topic for the original question. Thanks for all your help!
I have a Windows application using SqlDependency running on a separate thread-pool thread. The application is a log-monitor UI that gets the latest rows added to a specific table in the database and shows them in a DataGridView. You can see the application source code from this LINK, or follow this script.
const string tableName = "OutgoingLog";
const string statusMessage = "{0} changes have occurred.";
int changeCount = 0;
private static DataSet dataToWatch = null;
private static SqlConnection connection = null;
private static SqlCommand command = null;
public frmMain()
{
InitializeComponent();
}
private bool CanRequestNotifications()
{
// In order to use the callback feature of the
// SqlDependency, the application must have
// the SqlClientPermission permission.
try
{
SqlClientPermission perm = new SqlClientPermission(PermissionState.Unrestricted);
perm.Demand();
return true;
}
catch
{
return false;
}
}
private void dependency_OnChange(object sender, SqlNotificationEventArgs e)
{
// This event will occur on a thread pool thread.
// Updating the UI from a worker thread is not permitted.
// The following code checks to see if it is safe to
// update the UI.
ISynchronizeInvoke i = (ISynchronizeInvoke)this;
// If InvokeRequired returns True, the code
// is executing on a worker thread.
if (i.InvokeRequired)
{
// Create a delegate to perform the thread switch.
OnChangeEventHandler tempDelegate = new OnChangeEventHandler(dependency_OnChange);
object[] args = { sender, e };
// Marshal the data from the worker thread
// to the UI thread.
i.BeginInvoke(tempDelegate, args);
return;
}
// Remove the handler, since it is only good
// for a single notification.
SqlDependency dependency = (SqlDependency)sender;
dependency.OnChange -= dependency_OnChange;
// At this point, the code is executing on the
// UI thread, so it is safe to update the UI.
++changeCount;
lblChanges.Text = String.Format(statusMessage, changeCount);
// Reload the dataset that is bound to the grid.
GetData();
}
AutoResetEvent running = new AutoResetEvent(true);
private void GetData()
{
// Start the retrieval of data on another thread to let the UI thread free
ThreadPool.QueueUserWorkItem(o =>
{
running.WaitOne();
// Empty the dataset so that there is only
// one batch of data displayed.
dataToWatch.Clear();
// Make sure the command object does not already have
// a notification object associated with it.
command.Notification = null;
// Create and bind the SqlDependency object
// to the command object.
SqlDependency dependency = new SqlDependency(command);
dependency.OnChange += new OnChangeEventHandler(dependency_OnChange);
using (SqlDataAdapter adapter = new SqlDataAdapter(command))
{
adapter.Fill(dataToWatch, tableName);
try
{
running.Set();
}
finally
{
// Update the UI
dgv.Invoke(new Action(() =>
{
dgv.DataSource = dataToWatch;
dgv.DataMember = tableName;
//dgv.FirstDisplayedScrollingRowIndex = dgv.Rows.Count - 1;
}));
}
}
});
}
private void btnAction_Click(object sender, EventArgs e)
{
changeCount = 0;
lblChanges.Text = String.Format(statusMessage, changeCount);
// Remove any existing dependency connection, then create a new one.
SqlDependency.Stop("Server=.; Database=SMS_Tank_Log;UID=sa;PWD=hana;MultipleActiveResultSets=True");
SqlDependency.Start("Server=.; Database=SMS_Tank_Log;UID=sa;PWD=hana;MultipleActiveResultSets=True");
if (connection == null)
{
connection = new SqlConnection("Server=.; Database=SMS_Tank_Log;UID=sa;PWD=hana;MultipleActiveResultSets=True");
}
if (command == null)
{
command = new SqlCommand("select * from OutgoingLog", connection);
//SqlParameter prm =
// new SqlParameter("#Quantity", SqlDbType.Int);
//prm.Direction = ParameterDirection.Input;
//prm.DbType = DbType.Int32;
//prm.Value = 100;
//command.Parameters.Add(prm);
}
if (dataToWatch == null)
{
dataToWatch = new DataSet();
}
GetData();
}
private void frmMain_Load(object sender, EventArgs e)
{
btnAction.Enabled = CanRequestNotifications();
}
private void frmMain_FormClosing(object sender, FormClosingEventArgs e)
{
SqlDependency.Stop("Server=.; Database=SMS_Tank_Log;UID=sa;PWD=hana;MultipleActiveResultSets=True");
}
The problem:
I run into several error situations (images in the first comment):
(No. 1): I get this error dialog, and I don't know what causes it.
(No. 2): I get nothing in my grid view (no errors and no data).
(No. 3): I get only the column names and no rows, although the table has rows.
I need help, please.
I may be wrong, but a DataSet does not seem to have notification capability, so the DataGridView may be surprised if you change it behind its back.
You could try to make it explicit that you're changing the data source by first setting it to null:
dgv.DataSource = null;
dgv.DataSource = dataToWatch;
dgv.DataMember = tableName;
It's worth a try...
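If it helps, that would go inside the Invoke call at the end of GetData() in the question (just a sketch, reusing the dgv, dataToWatch and tableName names from the posted code):

dgv.Invoke(new Action(() =>
{
    // Detach and reattach the binding so the grid re-reads the refreshed DataSet.
    dgv.DataSource = null;
    dgv.DataSource = dataToWatch;
    dgv.DataMember = tableName;
}));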
I am stuck with the above issue. I have found a lot of solutions, but none of them are working for me.
Please find my code below:
private void btnRunQuery_Click(object sender, EventArgs e)
{
try
{
Thread ProcessThread = new Thread(Process);
ProcessThread.Start();
Thread.CurrentThread.Join();
}
catch
{
Debug.WriteLine("Error in model creation");
Console.WriteLine("Error in model creation");
}
finally
{
//dsModel = null;
}
}
private void Process()
{
using (var dataContext = new IControlerDataContext())
{
dataContext.EnlistTransaction();
IItemPropertyRepository itemPropertyRepository = ObjectContainer.Resolve<IItemPropertyRepository>();
IList<ItemProperty> itemPropertyCollection = itemPropertyRepository.LoadAll();
totalCount = itemPropertyCollection.Count;
currentCount = 0;
foreach (var itemProperty in itemPropertyCollection)
{
try
{
message = string.Empty;
currentCount++;
if (itemProperty.DeletedDate == null && (itemProperty.MetaItemProperty.ValueType == MetaItemPropertyValueType.MetaItemTableProperty || itemProperty.MetaItemProperty.ValueType == MetaItemPropertyValueType.MetaItemTableMultiSelectProperty))
{
//Property refresh issue in only applicable for table and multitable property.
//Need to filter the itemproperty for Table and multitable select property.
message = ProcessItemProperty(itemProperty);
//txtLogDetails.Text += message + Environment.NewLine;
//txtLogDetails.Refresh();
//txtLogDetails.ScrollToCaret();
}
//Log(message);
//progressBar.Value = (Int32)(currentCount * 100 / totalCount);
//progressBar.Refresh();
Invoke(new MyDelegate(ShowProgressBar), (Int32)(currentCount * 100 / totalCount));
}
catch (Exception ex)
{
txtLogDetails.Text += "EXCEPTION ERROR : " + itemProperty.Id.ToString();
dataContext.RollBackTransaction();
}
}
dataContext.CompleteTransaction();
}
}
delegate void MyDelegate(int percentage);
private void ShowProgressBar(int percentage)
{
progressBar.Value = percentage;
progressBar.Refresh();
//txtLogDetails.Text = message;
}
When it executes the line Invoke(new MyDelegate(ShowProgressBar), (Int32)(currentCount * 100 / totalCount));, it never returns: execution goes into the Invoke call and never comes back, and no exception is caught either.
Can anyone please help me out with this?
Thanks,
Mahesh
The control progressBar must be accessed from the thread that it was created on. Use BeginInvoke.
I would replace this line ...
Invoke(new MyDelegate(ShowProgressBar), (Int32)(currentCount * 100 / totalCount));
... with this one ...
this.progressBar.BeginInvoke(
(MethodInvoker)delegate() {
this.progressBar.Value =
Convert.ToInt32(currentCount * 100 / totalCount); } );
Or you can replace those lines ...
progressBar.Value = percentage;
progressBar.Refresh();
//txtLogDetails.Text = message;
... with these lines ...
this.progressBar.BeginInvoke(
(MethodInvoker)delegate() {
progressBar.Value = percentage;
progressBar.Refresh();
//txtLogDetails.Text = message;
} );
I think the problem is that you block the UI thread with Thread.Join.
Thread.Join will in theory continue to pump UI messages but in reality it doesn't always work.
See Chris Brumme's blog here, especially:
The net effect is that we will always pump COM calls waiting to get into your STA. And any SendMessages to any windows will be serviced. But most PostMessages will be delayed until you have finished blocking.
You should let the button event finish and have the new thread post back a message when it is done (e.g. by using a BackgroundWorker or some other async framework); see the sketch below.
(Your catch statement is useless anyway, since it will only catch thread-creation exceptions.)
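For example, a minimal sketch of the BackgroundWorker approach, reusing the btnRunQuery_Click, Process and progressBar names from the question and assuming Process is adapted to take the worker and report progress through it:

private void btnRunQuery_Click(object sender, EventArgs e)
{
    var worker = new System.ComponentModel.BackgroundWorker { WorkerReportsProgress = true };

    // Runs on a thread-pool thread; do not touch any controls in here.
    worker.DoWork += (s, args) => Process((System.ComponentModel.BackgroundWorker)s);

    // Raised on the UI thread, so updating the progress bar here is safe.
    worker.ProgressChanged += (s, args) => progressBar.Value = args.ProgressPercentage;

    worker.RunWorkerCompleted += (s, args) =>
    {
        if (args.Error != null)
            Debug.WriteLine("Error in model creation: " + args.Error);
    };

    worker.RunWorkerAsync();
    // No Thread.Join here: the click handler returns and the UI keeps pumping messages.
}

// Inside the loop in Process(), instead of calling Invoke, report progress:
// worker.ReportProgress((int)(currentCount * 100 / totalCount));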