I have two functions:
private void Function1()
{
Function2();
// Do other stuff here which gets executed
}
private void Function2()
{
Application.LoadLevel("Level");
}
I had always assumed that calling Application.LoadLevel() takes effect immediately, but instead the other stuff in Function1 still gets executed.
Has this behaviour changed in recent versions, or has it always been this way?
Application.LoadLevel is immediate in the sense that no further frames are rendered until the level is loaded, but the current frame still runs to completion.
This means that code already scheduled to run this frame will still execute.
The call won't stop the current method from finishing.
You could use coroutines to achieve that effect.
private void Function1()
{
StartCoroutine(Coroutine1());
}
private void Function2()
{
Application.LoadLevel("Level");
}
private IEnumerator Coroutine1()
{
Function2();
yield return null;
// Do other stuff here which gets executed
}
I start a process and call a function when the process dies.
The IEnumerator LoadImageSet is started with StartCoroutine, but it never runs.
If the StartCoroutine call is commented out, the loop below runs to completion; if it is not, the loop executes only once and stops.
public void StartProcess(string path)
{
    process = DI.Process.Start(path);
    process.EnableRaisingEvents = true;
    process.Exited += (obj, e) =>
    {
        Debug.Log(2);
        string str = ReadFile();
        string[] arrStr = str.Split(' ');
        for (int i = 0; i < arrStr.Length - 1; i++)
        {
            Debug.Log(arrStr.Length);
            FileInfo fileInfo = new FileInfo(arrStr[i]);
            importImage.LoadImageBut(arrStr[i], fileInfo.Name, fileInfo.LastWriteTime);
        }
    };
}
public void LoadImageBut(string filePath, string fileNames, DateTime dateTime)
{
    fileName = fileNames;
    fileInfo = dateTime.ToString();
    Debug.Log(filePath);
    // this part
    StartCoroutine(LoadImageSet(filePath));
}
IEnumerator LoadImageSet(string filePath)
{
    Debug.Log(33);
    using (UnityWebRequest uwr = UnityWebRequestTexture.GetTexture(filePath))
    {
        yield return null;
        yield return uwr.SendWebRequest();
        if (string.IsNullOrEmpty(uwr.error))
        {
            tex = DownloadHandlerTexture.GetContent(uwr);
        }
        else
        {
            Debug.Log(uwr.error);
        }
    }
}
If I call StartCoroutine(LoadImageSet(filePath)) from Start, it works fine.
Call StartCoroutine from a MonoBehaviour's Start or Awake method.
StartCoroutine is meant to be used from the main thread. Start the coroutine from somewhere in the Unity object life-cycle, such as a MonoBehaviour's Awake() or Start() method, and it should work fine. The handler you subscribe to the process.Exited event runs off the Unity main thread, where coroutines won't work.
To avoid this problem, might I suggest something like the following:
public class ImageLoader : MonoBehaviour {
    // ConcurrentQueue, because LoadImage may be called off the main thread
    private readonly ConcurrentQueue<string> _imageQueue = new ConcurrentQueue<string>();
    private static ImageLoader Instance { get; set; }

    public static void LoadImage(string dataPath) =>
        Instance._imageQueue.Enqueue(dataPath);

    IEnumerator Start() {
        Instance = this;
        while (true) {
            yield return null;
            if (_imageQueue.TryDequeue(out var nextImage)) {
                StartCoroutine(LoadImageSet(nextImage));
            }
        }
    }

    IEnumerator LoadImageSet(string filePath) { ... }
}
Now you can call ImageLoader.LoadImage(dataPath) off the main thread, and the MonoBehaviour's coroutine will pick up the work and process it on the main thread, where your LoadImageSet coroutine works again.
I am currently developing a custom EditorWindow extension in Unity.
I have overridden the Update() function, and when certain conditions are met I call the Repaint() method to update the UI accordingly.
public class MyAwesomePlugin : EditorWindow
{
...
public void Update()
{
if (condition_1())
{
...
Repaint();
}
if (condition_2())
{
...
Repaint();
}
}
}
My question is whether multiple calls to Repaint() in the same update will cause duplicate redraws, or whether Unity is smart enough to aggregate them and redraw only once.
It would be better to create and set a flag variable, bool isDirty = false.
public void Update()
{
bool isDirty = false;
if (condition_1())
{
...
isDirty = true;
}
if (condition_2())
{
...
isDirty = true;
}
if (isDirty) Repaint();
}
This bypasses the question, but any unnecessary function calls will adversely affect performance.
If Update contains return statements after a point where isDirty could have been set to true, place if (isDirty) Repaint(); before each such return.
I'm writing some wrapper classes around zurb foundation.
Foundation widgets need an init() function to be called after the elements have been added to the DOM.
I can accomplish this easily enough with this method:
public static void initWidgets() {
Scheduler.get().scheduleDeferred(new Scheduler.ScheduledCommand() {
@Override
public void execute() {
foundationInit();
}
});
}
...where foundationInit() is a JSNI call to the foundation init() function. I then add a call to initWidgets() in the constructor of any foundation element. So far so good.
However, if multiple foundation widgets are added to the DOM during a particular event loop, then the init() method will be called multiple times. Foundation doesn't actually care about this, but it would be nice to find a way around this.
Is there any scheduler functionality / pattern that'd allow me to schedule a particular command to run only once, no matter how many times the schedule method is called with that command?
Something like: scheduleDeferredIfNotAlreadyScheduled(Command c)
I don't know how to get a handle on the event loop, so I don't know how to reset a flag that'd tell me whether or not to add the command or not.
I don't know of any Scheduler command to do that, but it could be done with a static boolean variable, e.g.:
private static boolean initialized;
public static void initWidgets() {
initialized = false;
Scheduler.get().scheduleDeferred(new Scheduler.ScheduledCommand() {
@Override
public void execute() {
if (!initialized) {
initialized = true;
foundationInit();
}
}
});
}
In such cases I usually use Guava's Supplier: Suppliers.memoize uses double-checked locking and is really safe.
public static Supplier<Boolean> supplier = Suppliers.memoize(new Supplier<Boolean>() {
    @Override
    public Boolean get() {
        foundationInit();
        return true;
    }
});

public static void initWidgets() {
    Scheduler.get().scheduleDeferred(new Scheduler.ScheduledCommand() {
        @Override
        public void execute() {
            boolean initialized = supplier.get();
        }
    });
}
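If Guava is not available, the same run-once behaviour can be sketched in plain Java. This is a minimal, hypothetical stand-in for Suppliers.memoize, not the real Guava implementation:

```java
import java.util.function.Supplier;

// Minimal run-once memoization sketch (plain Java, no Guava assumed).
// The wrapped supplier is invoked at most once; later calls return the
// cached value. get() is synchronized, so it is safe even if commands
// were ever to run concurrently.
final class MemoizingSupplier<T> implements Supplier<T> {
    private final Supplier<T> delegate;
    private boolean initialized;
    private T value;

    MemoizingSupplier(Supplier<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public synchronized T get() {
        if (!initialized) {
            value = delegate.get();
            initialized = true;
        }
        return value;
    }
}
```

Every scheduled command would then call supplier.get(), and only the first call actually runs foundationInit().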
My problem is that in the main class I have some OSGi references that work just fine when the class is first called, but after that all the references become null. When I close the main window and call the shutdown method, the hubService reference is null. What am I doing wrong here?
private void shutdown() {
if(hubService == null) {
throw new NullPointerException();
}
hubService.shutdownHub(); // why is hubService null?
}
// bind hub service
public synchronized void setHubService(IHubService service) {
hubService = service;
try {
hubService.startHub(PORT, authenticationHandler);
} catch (Exception e) {
JOptionPane.showMessageDialog(mainFrame, e.toString(), "Server", JOptionPane.ERROR_MESSAGE);
System.exit(0);
}
}
// remove hub service
public synchronized void unsetHubService(IHubService service) {
hubService.shutdownHub();
hubService = null;
}
If a field can be read and written by multiple threads, you must protect reads as well as writes. Your first method, shutdown, does not protect the read of hubService, so the value of hubService can change between the first read and the second. You don't show the declaration of the hubService field; you could make it volatile, or only read it while synchronized on the same object used to synchronize when writing the field. Your shutdown implementation could then look like:
private volatile IHubService hubService;
private void shutdown() {
IHubService service = hubService; // make a copy of the field in a local variable
if (service != null) // use local var from now on since the field could have changed
service.shutdownHub();
}
I assume your shutdown method is the DS deactivate method? If so, why do you shut down in the unset method as well as in the shutdown method?
Overall the design does not seem very sound. The IHubService is used as a factory and should return some object that is then closed in the deactivate method. You made the IHubService effectively a singleton. Since it must come from another bundle, it should handle its life cycle itself.
Since you also do not use annotations, it is not clear whether your set/unset methods are static/dynamic and/or single/multiple. The following code should not have your problems (example code with bnd annotations):
@Component
public class MyImpl {
    IHubService hub;

    @Activate
    void activate() {
        hub.startHub(PORT, authenticationHandler);
    }

    @Deactivate
    void deactivate() {
        hub.shutdownHub();
    }

    @Reference
    void setHub(IHubService hub) { this.hub = hub; }
}
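The factory design suggested above (the service returning an object that the component later closes) could be sketched like this. All names here (HubSession, IHubServiceFactory, open) are hypothetical illustrations, not part of any real OSGi or hub API:

```java
// Sketch of the factory shape: the service hands out a session object,
// and the component closes only the session it created, instead of
// shutting the shared service down.
interface HubSession extends AutoCloseable {
    @Override
    void close(); // shuts down this hub instance only
}

interface IHubServiceFactory {
    HubSession open(int port); // start a hub and return a handle to it
}

class MyComponent {
    private final IHubServiceFactory factory;
    private HubSession session;

    MyComponent(IHubServiceFactory factory) {
        this.factory = factory;
    }

    void activate(int port) {
        session = factory.open(port);
    }

    void deactivate() {
        if (session != null) {
            session.close();
            session = null;
        }
    }
}
```

With this shape, the service's own life cycle stays in its providing bundle, and each consumer only manages the sessions it opened.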
Here is some sample code, found in a few places while googling around, that retrieves data from a database using the yield keyword:
public IEnumerable<object> ExecuteSelect(string commandText)
{
using (IDbConnection connection = CreateConnection())
{
using (IDbCommand cmd = CreateCommand(commandText, connection))
{
connection.Open();
using (IDataReader reader = cmd.ExecuteReader())
{
while(reader.Read())
{
yield return reader["SomeField"];
}
}
connection.Close();
}
}
}
Am I correct in thinking that in this sample code, the connection would not be closed if we did not iterate over the whole data reader?
Here is an example that would not close the connection, if I understand yield correctly:
foreach(object obj in ExecuteSelect(commandText))
{
break;
}
For a db connection that might not be catastrophic, I suppose the GC would clean it up eventually, but what if instead of a connection it was a more critical resource?
The iterator that the compiler synthesises implements IDisposable, which foreach calls when the loop is exited.
The iterator's Dispose() method will run the pending finally blocks of the using statements on early exit.
As long as you consume the iterator in a foreach loop, a using() block, or call its Dispose() method in some other way, the cleanup will happen.
The connection will be closed automatically, since you're using it inside a "using" block.
From the simple test I tried, aku is right: Dispose is called as soon as the foreach block exits.
@David: However, the call stack is kept between calls, so the connection would not be closed; on the next call we return to the instruction after the yield, which is inside the while block.
My understanding is that when the iterator is disposed, the connection is disposed with it. I also think the Connection.Close call is not needed, because the using clause takes care of it when the object is disposed.
Here is a simple program I tried to test the behavior...
class Program
{
static void Main(string[] args)
{
foreach (int v in getValues())
{
Console.WriteLine(v);
}
Console.ReadKey();
foreach (int v in getValues())
{
Console.WriteLine(v);
break;
}
Console.ReadKey();
}
public static IEnumerable<int> getValues()
{
using (TestDisposable t = new TestDisposable())
{
for(int i = 0; i<10; i++)
yield return t.GetValue();
}
}
}
public class TestDisposable : IDisposable
{
private int value;
public void Dispose()
{
Console.WriteLine("Disposed");
}
public int GetValue()
{
value += 1;
return value;
}
}
Judging from this technical explanation, your code would not work as expected: it would abort on the second item, because the connection would already be closed when returning the first item.
@Joel Gauvreau: Yes, I should have read on. Part 3 of that series explains that the compiler adds special handling so that finally blocks trigger only at the real end.