I need to use the Goal Seek function in Excel to run several calculations as a background process.
I am already using an approach with a while loop that searches for the value itself by changing a cell and checking the output value.
I wonder if there is any other option for calculating the goal-seek value in a background process, for example triggering Goal Seek from PowerShell or some other tool.
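For what it's worth, Excel's COM object model exposes Goal Seek directly (Range.GoalSeek), so one option is to drive it headlessly from PowerShell. A minimal sketch, assuming a workbook at C:\temp\model.xlsx where B1 holds the formula and A1 is the input cell:

$excel = New-Object -ComObject Excel.Application
$excel.Visible = $false                                  # keep Excel in the background
$wb = $excel.Workbooks.Open("C:\temp\model.xlsx")
$ws = $wb.Worksheets.Item(1)
# Find the input value in A1 that makes the formula in B1 evaluate to 100
$null = $ws.Range("B1").GoalSeek(100, $ws.Range("A1"))
$result = $ws.Range("A1").Value2
Write-Output "Goal Seek found: $result"
$wb.Close($false)                                        # close without saving
$excel.Quit()
[void][System.Runtime.InteropServices.Marshal]::ReleaseComObject($excel)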
I am running a model multiple times with different input values, and it produces different output on each run. I am trying to write code that will get AnyLogic to write each experiment run's output to a different cell in an Excel sheet, i.e. throughput vs. time. I am using a dataset. Is there a script or hint that could help solve this issue?
Currently I am using the following commands. They keep overwriting the output in the same cells.
Out_excelFile1.setCellValue("Sink1 Out",2,2,2);
Out_excelFile1.writeDataSet(Sink1_D,2,3,2);
Best if you actually use the built-in database for outputs and only write to Excel at the end of all runs, tbh.
But in your case, you need to change the row number by your replication/iteration number. Use getCurrentIteration() or getCurrentReplication() in your "after simulation run" or "after replication" or "after iteration" experiment code sections to get this right.
Then, it would look something like Out_excelFile1.setCellValue("Sink1 Out",2,getCurrentIteration(),2);
(Details depend on your actual implementation, check the help for further info on replications, iterations and those functions)
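As a rough sketch, assuming a parameter-variation experiment and the objects from the question, the "After iteration" code could look like this:

// Sketch only: shift the output row by the current iteration number so each
// run lands in different cells. Out_excelFile1, Sink1_D and the sheet/column
// indices are taken from the question; adjust the offsets to your layout.
int row = 2 + getCurrentIteration();
Out_excelFile1.setCellValue("Sink1 Out", 2, row, 2);
Out_excelFile1.writeDataSet(Sink1_D, 2, row + 1, 2);

Note that writeDataSet() writes one row per dataset sample, so if each run's dataset holds many samples you may need a larger offset, or a separate column or sheet per run.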
I have been stuck on this problem for about 2 months now. I have 2 different GUIs, one built with Windows Forms and the other with WPF. Each has text boxes that I want to refresh and update every few seconds. I have accomplished this without a problem; however, using both the $Form.Refresh() and textbox.Refresh() methods in loops means you cannot interact with the rest of the GUI's tabs and buttons, since I had to put the refresh into an infinite loop to update the text boxes every few seconds.
I have explored Start-Job to do the data gathering for the text boxes, but I still get stuck trying to update the text boxes without looping the script around Receive-Job.
Can someone tell me how to independently update the text boxes while still allowing other interaction with the GUI? Right now I have an external GUI that is stuck in an infinite loop displaying the data, while the master GUI is free to do the rest of the work and kills the data GUI when completed.
The simplest thing I end up doing for infinite loops in a GUI is to add this inside the loop:
[System.Windows.Forms.Application]::DoEvents()
It is quick and dirty, but it works, so you can use this in the loop that waits for Receive-Job.
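For example, a rough sketch of the polling loop (assuming $job comes from Start-Job and $textbox is a control on the form):

while ($true) {
    $data = Receive-Job -Job $job -Keep            # read job output without consuming it
    if ($data) { $textbox.Text = $data[-1] }       # show the latest result
    [System.Windows.Forms.Application]::DoEvents() # let the form process clicks and redraws
    Start-Sleep -Milliseconds 250
}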
On another note, jobs are not very efficient: each job spawns its own process, which can get messy when you have multiple jobs running. If you run into that, check out PowerShell runspaces. They are the best way I know to handle multithreading in PowerShell.
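A minimal runspace sketch (the names here are made up) that gathers data on a second thread and shares it through a synchronized hashtable:

$sync = [hashtable]::Synchronized(@{ Text = '' })  # shared state between threads
$ps = [powershell]::Create()
$ps.Runspace = [runspacefactory]::CreateRunspace()
$ps.Runspace.Open()
$ps.Runspace.SessionStateProxy.SetVariable('sync', $sync)
[void]$ps.AddScript({
    while ($true) {
        $sync.Text = (Get-Date).ToString()         # stand-in for the real data gathering
        Start-Sleep -Seconds 2
    }
})
$handle = $ps.BeginInvoke()
# Back on the GUI thread, a WinForms Timer can copy $sync.Text into the
# textbox every few seconds without blocking the form.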
Tableau is an excellent tool for visualizing data. However, it is designed to be the final stop in a data (ETL) pipeline.
My Tableau workbook uses a bunch of Table Calcs to generate a list of "recommended orders". Rather than view these, I want to automate and execute them. This would make Tableau the engine of a quasi-ML process.
In other words, I would like to make Tableau a part of my ETL pipeline and send data to another tier. How can I write a back-end program that executes my Tableau workbook and receives a results dataset?
See the end of this article for example data I want to automate:
http://robm26.blogspot.com/2015/10/keep-your-factory-humming-with-tableau.html
Any ideas?
You're not going to like the answer I'm going to give you: "Don't do this".
Tableau isn't meant to be a task in a larger ETL pipeline, and the reason you're having problems making it behave the way you want is that it isn't designed to work that way.
Above and beyond the fact that you've figured out how to get the result you want in Tableau ("the work is done"), Tableau isn't offering you any real value in the scenario you're describing. Use a tool (like Alteryx) that is purpose-built for this sort of work.
The above answer is correct that tabcmd is the way to pull it out. We use a function in python to generate the tabcmd requests so that they can be batched.
import subprocess

def runTabCmd(cmd, run_tabcmd='yes'):
    # Run a tabcmd command and display its output
    # (run_tabcmd was a global flag in our setup; made a parameter here)
    print(cmd)
    if run_tabcmd == 'yes':
        p = subprocess.Popen(
            cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        for line in p.stdout:
            print(line.decode().rstrip())
You probably already knew that, but for us it was a way to completely automate pulling the data and loading it into another Python package like scikit-learn for a streamlined ML solution.
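For example (the workbook, view, and file names here are made up), a call like this pulls a view down as CSV for the next stage:

runTabCmd('tabcmd export "MyWorkbook/MyView" --csv -f results.csv')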
I'm editing this answer to agree with Russell's answer. Tableau is not an ETL tool and should not be used as such. If you absolutely have to do something, you can use what I provided. Otherwise, the best practice is to use a tool designed for the job.
You can easily use tabcmd to get the results of a view in CSV, which can be used later in your ETL process. If you need to automate it, you can write a script and execute it with a cron job. I, myself, have a few views that are exported to CSV and used later in my ETL stream to feed our CRM.
Just remember to build the view exactly as you want it exported to CSV, usually including the order of the fields. Another tip: I don't rely on the default "Measure Names" and "Measure Values"; to make sure everything comes out right in the CSV, I add the fields manually to the rows/columns shelves.
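A rough sketch of such a script (the server URL, credentials, view path, and file locations are all assumptions):

#!/bin/sh
# Log in, export the view to CSV, log out
tabcmd login -s https://tableau.example.com -u etl_user -p "$TABCMD_PASSWORD"
tabcmd export "CRM/DailyFeed" --csv -f /data/exports/daily_feed.csv
tabcmd logout
# Example crontab entry to run it nightly at 2 AM:
# 0 2 * * * /usr/local/bin/export_daily_feed.sh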
I have a relatively large spreadsheet (300 rows, 30 columns) that I color based on the values in it. I'm accessing the API minimally, using only two calls:
getValues(...) to access all the values of the data range.
setBackgrounds(...) to set all the backgrounds of the data range.
This runs in about half a second or less. However, it gets in the way if I make it run on every edit via onEdit(), but I also don't want it updating at regular time intervals when I'm not editing; that seems wasteful. Is there a good way to make the script run in a "delayed" way, updating at regular intervals only while I'm editing?
Firstly, I would say you should look at Google Sheets' conditional formatting (Format > Conditional formatting menu item in Sheets) -- you may be able to do much of what you need without involving Apps Script at all.
Failing that, you can set up a regular time-based trigger to check for edits and change the backgrounds appropriately. You can support this trigger with a separate onEdit() trigger to record what has changed internally. The flow goes like this (see the sketch after the list):
A change is made and onEdit() triggers
The onEdit() trigger only records the changed cell locations to a local variable or Cache
A time-based trigger fires every minute/hour/whenever
The time-based trigger checks the cache for edited cells, alters their backgrounds, then clears them from the cache
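A rough Apps Script sketch of that flow (the function names and the pickColor() helper are assumptions; pickColor() stands in for your own value-to-color rule):

function onEdit(e) {
  // Cheap: just record the edited cell's address in the script cache
  var cache = CacheService.getScriptCache();
  var edited = cache.get('editedCells') || '';
  cache.put('editedCells', edited + e.range.getA1Notation() + ',', 3600);
}

// Install this on a time-based trigger (e.g. every minute)
function recolorEditedCells() {
  var cache = CacheService.getScriptCache();
  var edited = cache.get('editedCells');
  if (!edited) return;
  var sheet = SpreadsheetApp.getActiveSheet();
  edited.split(',').filter(String).forEach(function (a1) {
    var range = sheet.getRange(a1);
    range.setBackground(pickColor(range.getValue()));
  });
  cache.remove('editedCells');
}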
That said, depending on your workflow this approach may not be much better than simply using a time trigger to change the cells directly.
Recently I got access to run my code on a cluster. My code is totally parallelizable, but I don't know how best to use its parallel nature. I have to compute the elements of a big matrix, and each of them is independent of the others. I want to submit the job to run on several machines (say 100) to speed up the computation of the matrix.
Right now, I have a script that submits multiple jobs, each responsible for computing a part of the matrix and saving it in a .mat file. At the end I merge them to get the whole matrix. To submit each individual job, I create a new .m file (run1.m, run2.m, ...) that sets a variable and then runs the function to compute the associated part of the matrix. So basically run1.m is
id=1;compute_dists_matrix
and compute_dists_matrix uses id to find the part it is going to compute. Then I wrote a script to create run1.m through run60.m and qsub them to the cluster.
I wonder if there is a better way to do this using MATLAB's own features, since this seems to be a very typical task.
Yes, it works, but it is not ideal, and as you say this is a common problem. MATLAB has the Parallel Computing Toolbox for this.
Does your cluster have it? If so, distributed arrays are worth a look. If you don't have access to the toolbox, then what you are doing is the only other way. You can wrap your run1.m, run2.m, ... files in a controlling script to automate it for you.
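For instance, a parfor loop from that toolbox would replace the manual job splitting entirely. A minimal sketch, where compute_dists_row() is a stand-in for whatever computes one independent chunk of your matrix:

parpool;                            % or parpool('YourClusterProfile') on the cluster
n = 60;
D = zeros(n, n);
parfor i = 1:n
    D(i, :) = compute_dists_row(i); % rows are independent, so parfor slices cleanly
end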
I believe you could use command-line arguments for the id and submit jobs with a range of values for it. Command-line arguments can be processed by launching MATLAB from the command line without the IDE, passing the name of the script to execute along with the arguments. You can likely set up dependencies in your job manager and create a "reduce" script to merge the partial results from the files. The whole process could then be managed from a single script that generates the id and other necessary arguments and submits the processing and post-processing jobs with dependencies.
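Scheduler syntax varies, but as a sketch under Grid Engine an array job (qsub -t 1-60) removes the need for the generated runN.m files, since each task reads its id from the environment:

#!/bin/sh
# Submitted with: qsub -t 1-60 run_matrix.sh
# $SGE_TASK_ID supplies the id that run1.m ... run60.m used to hard-code
matlab -nodisplay -nosplash -r "id=$SGE_TASK_ID; compute_dists_matrix; exit"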