Flutter - how to run a script from a relative path

I have a .vbs script inside the Flutter project that I'd like to run after the press of a button. I have verified that the script works if the path to it is provided as an absolute path. However, for some mysterious reason, it does not work when I change the path to a relative one. I am losing my mind, am I doing anything wrong? Is there any other way to solve this (maybe placing the script in the assets)?
The directory structure is the following:
lib
├───screens
│       destination_selection_screen.dart
│
└───scripts
        runAssemblySyncHidden.vbs
I would like to run runAssemblySyncHidden.vbs from the destination_selection_screen.dart so logically my relative path looks like this:
onPressed: () async {
  final process = await Process.start(
      '..\\scripts\\runAssemblySyncHidden.vbs', [],
      runInShell: true);
}
Unfortunately, it does not seem to work. But when I provide an absolute path to the script, it does. Any help appreciated.
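A likely cause: Process.start resolves relative paths against the process's current working directory (the directory the app was launched from), not against the location of the .dart source file. Below is a minimal sketch that builds an absolute path at runtime, assuming the app is launched from the project root so the script sits at lib\scripts relative to it (the function name and the use of wscript.exe as the .vbs host are illustrative assumptions):

import 'dart:io';

Future<void> runHiddenSync() async {
  // Directory.current is the process working directory,
  // not the folder containing this .dart file.
  final scriptPath =
      '${Directory.current.path}\\lib\\scripts\\runAssemblySyncHidden.vbs';
  // wscript.exe is the windowless Windows Script Host for .vbs files.
  final process = await Process.start('wscript.exe', [scriptPath]);
  final exitCode = await process.exitCode;
  print('runAssemblySyncHidden.vbs exited with $exitCode');
}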


Where on disk is the BIOS file used by Simics?

(I saw one of my previous posts didn't actually answer the "where's the BIOS file used by simics?" question, so I renamed the previous one and am pulling that question out and making it standalone here.)
I can see the BIOS code for a default "targets\qsp-x86\firststeps.simics" invocation by just stepping through the debugger from the start. But if I want to see the full binary, is there a specific file somewhere I can look at?
You can check the "bios" attribute on the motherboard object:
simics> board.mb->bios
"%simics%/targets/qsp-x86/images/SIMICSX58IA32X64_1_0_0_bp_r.fd"
You can specify which BIOS image to use via the bios_image script parameter to the qsp-clear-linux.simics script.
Help info for the script:
$ ./simics -h targets/qsp-x86/qsp-clear-linux.simics
System:
bios_image - existing file or NIL
BIOS file.
Default value:
"%simics%/targets/qsp-x86/images/SIMICSX58IA32X64_1_0_0_bp_r.fd"
You can run with your own BIOS like this:
$ ./simics -e '$bios_image=my-bios.bin' targets/qsp-x86/qsp-clear-linux.simics
Note that the BIOS is not handled quite consistently with some other things. Typically in Simics, disks and similar storage are images. You can list them using list-persistent-images and resolve their locations using lookup-file:
simics> list-persistent-images
┌─────────────────────┬────────────┬───────────────────────────────────────────────────────┐
│Image                │Unsaved data│File(s) (read-only/read-write)                         │
├─────────────────────┼────────────┼───────────────────────────────────────────────────────┤
│board.disk0.hd_image │          no│%simics%/targets/qsp-x86/images/cl-b28910-v2.craff (ro)│
│board.disk1.hd_image │          no│                                                       │
│board.mb.sb.spi_image│         yes│%simics%/targets/qsp-x86/images/spi-flash.bin (ro)     │
└─────────────────────┴────────────┴───────────────────────────────────────────────────────┘
simics> lookup-file "%simics%/targets/qsp-x86/images/spi-flash.bin"
"/disk1/simics-6/simics-qsp-x86-6.0.47/targets/qsp-x86/images/spi-flash.bin"
The BIOS in the QSP is just loaded straight into target memory for execution. Which is a bit of a cheat for convenience.
Upon searching around, I found the following folder:
C:\Users\yourusername\AppData\Local\Programs\Simics\simics-qsp-x86-6.0.44\targets\qsp-x86\images
Inside that folder are the following 3 files:
SIMICSX58IA32X64_1_0_0_bp_r.fd
SIMICSX58IA32X64-ahci.fd
spi-flash.bin
Both SIMICSX58IA32X64_1_0_0_bp_r.fd and SIMICSX58IA32X64-ahci.fd have UEFI firmware-volume headers at the start, and a seeming BIOS entry point at the end. The spi-flash.bin seems to have a placeholder for the flash descriptor which would go at the start of the flash, but is mostly empty. So I believe Intel basically either stitches these together in memory, or possibly just uses the spi-flash.bin to allow for "soft strap" configuration or some such (since it's a virtual MCH/ICH anyway).

How do I use the "Simics Training" and "QSP CPU" packages?

1 - There's a "Simics Training" package shown in the package manager, and a "targets\simics-user-training" and a "targets\workshop-01". Where is the documentation about starting up and going through these trainings? (I assume this is different from just the normal "my-simics-project-1/documentation.html" documentation, because that documentation never references either of those targets in the Getting Started section.)
2 - In the documentation there's a line: "The QSP-x86 package contains a legacy processor core which is used by default in the included simulated machines. To use more modern processors, the package QSP-CPU can be installed, which contains recent processor cores." How does one actually use QSP-CPU to select a different CPU to be simulated? (Related: I see a bunch of mentions of ICH10 in the release notes. Is that what the default QSP-x86 "targets\qsp-x86\firststeps.simics" is simulating? Ideally I'd like to simulate at least a PCH-based system.)
#Point 1
If you check the doc/ folder in your Simics project, you should have the lab instructions. It is a bit inconsistent that they are stand-alone PDFs, but that comes from how they are currently built. Look for nut-001 and workshop-01.
#Point 2 (and how come StackOverflow does not have heading styles? You can really use those to write nicely structured answers)
If you have installed everything, use the scripts "qsp-atom-core.simics" etc. to run the standard QSP setup but with a different type of core. For example:
> simics.bat targets\qsp-x86\qsp-client-core.simics
To see how that core is selected, open the script file. For example, to look at the client-core script, first type (or cat) the trampoline script in the project, then open the script file it points to. For example:
C:\Users\jengblo\simics-projects\my-simics-project-5>type targets\qsp-x86\qsp-client-core.simics
# Auto-generated file. Any changes will be overwritten!
decl { substitute "C:\\Users\\jengblo\\AppData\\Local\\Programs\\Simics\\simics-qsp-cpu-6.0.1\\targets\\qsp-x86\\qsp-client-core.simics" }
run-command-file "C:\\Users\\jengblo\\AppData\\Local\\Programs\\Simics\\simics-qsp-cpu-6.0.1\\targets\\qsp-x86\\qsp-client-core.simics"
Given that trampoline, go to the actual script file:
C:\Users\jengblo\simics-projects\my-simics-project-5>type C:\\Users\\jengblo\\AppData\\Local\\Programs\\Simics\\simics-qsp-cpu-6.0.1\\targets\\qsp-x86\\qsp-client-core.simics
# In order to run this, the QSP-x86 (2096), QSP-CPU (8112) and
# QSP-Clear-Linux (4094) packages should be installed.
decl {
  ! Script that runs the Quick Start Platform (QSP) with a client processor core.
  params from "%simics%/targets/qsp-x86/qsp-clear-linux.simics"
  default cpu_comp_class = "x86-coffee-lake"
  default num_cores = 4
}
run-command-file "%simics%/targets/qsp-x86/qsp-clear-linux.simics"
And note how the "cpu_comp_class" parameter is set. The way to find the available classes is a bit obscure, admittedly. In your running Simics session started from the client-core script (for example), check the types of the components inside the motherboard:
simics> list-components board.mb
┌─────────┬─────────────────────────┐
│Component│Class │
├─────────┼─────────────────────────┤
│cpu0 │processor_x86_coffee_lake│
│gpu │pci_accel_vga_comp │
│memory │simple_memory_module │
│nb │northbridge_x58 │
│sb │southbridge_ich10 │
└─────────┴─────────────────────────┘
Note the class of the cpu0 component. To find other classes following the same naming pattern, use the list-classes command:
simics> list-classes substr = processor_x86
The following classes are available:
┌─────────────────────────────┬──────────────────────────────┐
│ Class │ Short description │
├─────────────────────────────┼──────────────────────────────┤
│processor_x86QSP1 │N/A (module is not loaded yet)│
│processor_x86QSP2 │N/A (module is not loaded yet)│
│processor_x86_airmont │N/A (module is not loaded yet)│
│processor_x86_broadwell_xeon │N/A (module is not loaded yet)│
...
You can then build a custom script to start with a given core. Follow the pattern of "qsp-client-core.simics" as found in the installation: copy that file into your project, and modify the core class as well as other parameters, as in the sketch below.
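For instance, a minimal custom script might look like this (the x86-broadwell-xeon class name is an assumption based on the processor_x86_broadwell_xeon entry listed above; verify the exact names with list-classes in your installation):

# my-broadwell-qsp.simics (hypothetical file name, placed in the project)
decl {
  ! QSP with Broadwell Xeon cores instead of the default.
  params from "%simics%/targets/qsp-x86/qsp-clear-linux.simics"
  default cpu_comp_class = "x86-broadwell-xeon"
  default num_cores = 4
}
run-command-file "%simics%/targets/qsp-x86/qsp-clear-linux.simics"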

Powershell navigating to unknown directory

I have stumbled into an unfortunate situation, having to work inside a directory whose name I don't know in advance:
C:\Test\[Folder with unknown name]\theFileINeed.txt
The structure mentioned above originates from a zip file from an external source, so I cannot change it.
My goal is to navigate to the directory with the unknown name, so that it is my working directory and I can execute further commands there (like Get-ChildItem, e.g.).
Is there a simple way to e.g. use the cd command to move into that directory?
I have fiddled around a bit with Resolve-Path but couldn't find a helpful solution.
Thanks in advance.
Consider this structure:
C:\TMP\TEST
├───unknowndir1
│   │   nonuniquefile.txt
│   │   uniquefile.txt
│   │
│   ├───nonuniquesubdir
│   └───uniquesubdir
└───unknowndir2
    │   nonuniquefile.txt
    │
    └───nonuniquesubdir
You could do cd .\test\*\uniquesubdir, but you can't cd .\test\*\nonuniquesubdir, as you'll get an error: (...) path (...) resolved to multiple containers. The same error occurs even with cd .\test\*\uniquesubdir\.., as if it didn't even check for the existence of uniquesubdir.
So if you want to enter an unknown directory based on a file it contains, you'd have to do something like this: cd (Get-Item .\test\*\uniquefile.txt).DirectoryName
It will fail if you use nonuniquefile.txt, as it again resolves to two different directories. You could enter the first of them with cd (Get-Item .\test\*\nonuniquefile.txt).DirectoryName[0] if you don't care which one you use.
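For the original single-folder case you don't even need a unique file; a minimal sketch, assuming C:\Test contains exactly one subdirectory:

# Resolve the one unknown directory under C:\Test...
$unknown = Get-ChildItem -Path 'C:\Test' -Directory | Select-Object -First 1
# ...and make it the working directory
Set-Location -Path $unknown.FullName
# Further commands now run inside it
Get-ChildItem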

How to import data from cloud firestore to the local emulator?

I want to be able to run cloud functions locally and debug against a copy of the production data.
Is there a way to copy the data that is online to the local firestore emulator?
This can be accomplished through a set of commands in the terminal, run in the existing project:
1. Log in to Firebase and gcloud:
firebase login
gcloud auth login
2. See a list of your projects and connect to one:
firebase projects:list
firebase use your-project-name
gcloud projects list
gcloud config set project your-project-name
3. Export your production data to a gcloud bucket, with a folder name of your choosing:
gcloud firestore export gs://your-project-name.appspot.com/your-chosen-folder-name
4. Now copy this folder to your local machine; I do that directly in the functions folder:
Note: don't miss the dot (.) at the end of the command below.
cd functions
gsutil -m cp -r gs://your-project-name.appspot.com/your-chosen-folder-name .
5. Now we just want to import this folder. This should work with the basic command, thanks to a recent update from the Firebase team: https://github.com/firebase/firebase-tools/pull/2519
firebase emulators:start --import ./your-chosen-folder-name
Check out my article on Medium about it, with a shorthand script to do the job for you: https://medium.com/firebase-developers/how-to-import-production-data-from-cloud-firestore-to-the-local-emulator-e82ae1c6ed8
Note: it's better to use a different bucket for this, as copying into your project bucket will result in the folder being created in your Firebase Storage.
If you are interested in gsutil arguments like -m, you can see them described by executing gsutil --help.
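Put together, the whole round trip fits in a short shell script (a sketch of the steps above; the project name, bucket, and folder name are placeholders to substitute):

#!/bin/sh
PROJECT=your-project-name
BUCKET=gs://$PROJECT.appspot.com
FOLDER=your-chosen-folder-name

gcloud config set project $PROJECT
gcloud firestore export $BUCKET/$FOLDER         # export production data
gsutil -m cp -r $BUCKET/$FOLDER .               # copy the export locally
firebase emulators:start --import ./$FOLDER     # start the emulator with it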
My method is somewhat manual, but it does the trick. I've shared it in this useful GitHub thread, but I'll list the steps I did here in case you find them useful:
Go to my local Firebase project path.
Start the emulators using: firebase emulators:start
Manually create some mockup data using the GUI at http://localhost:4000/firestore, using the buttons provided: + Start Collection and + Add Document.
Export this data locally using: firebase emulators:export ./mydirectory
As for the project data located in Firebase Database / Cloud Firestore, I exported a single collection like this: gcloud firestore export gs://my-project-bucket-id.appspot.com --collection-ids=myCollection. The export is now located under Firebase Storage, in a folder with a timestamp as its name (I didn't use a prefix for my test).
Download this folder to the local drive with: gsutil cp -r gs://my-project-bucket-id.appspot.com/myCollection ./production_data_export
NOTE: I did this in a Windows environment. gsutil will throw "OSError: The filename, directory name, or volume label syntax is incorrect" if the folder name contains characters that are invalid in Windows (i.e. colons), or "OSError: Invalid argument.9.0 B]" if an inner file in the folder has invalid characters too. To be able to download the export locally, rename these to valid Windows names (i.e. removing the colons), like this: gsutil mv gs://my-project-bucket-id.appspot.com/2020-05-22T02:01:06_86152 gs://my-project-bucket-id.appspot.com/myCollection
Once downloaded, imitate the local export structure by renaming the folder to firestore_export and copying the firebase-export-metadata.json file from the local export folder. Just to be visual, here's the structure I got:
$ tree .
.
├── local_data_export
│   ├── firebase-export-metadata.json
│   └── firestore_export
│       ├── all_namespaces
│       │   └── all_kinds
│       │       ├── all_namespaces_all_kinds.export_metadata
│       │       └── output-0
│       └── firestore_export.overall_export_metadata
└── production_data_export
    ├── firebase-export-metadata.json
    └── firestore_export
        ├── all_namespaces
        │   └── kind_myCollection
        │       ├── all_namespaces_kind_myCollection.export_metadata
        │       ├── output-0
        │       └── output-1
        └── firestore_export.overall_export_metadata

8 directories, 9 files
Finally, start the local emulator pointing to this production data to be imported: firebase emulators:start --import=./mock_up_data/production_data_export/
You should see the imported data at: http://localhost:4000/firestore/
This should assist readers for now, while we await a more robust solution from the Firebase folks.
You can use the firestore-backup-restore package to export and import your production data as JSON files.
I wrote a quick hack to allow importing these JSON files into the Firebase emulator's Firestore instance.
I proposed a pull request and made this npm module in the meantime.
You can use it this way:
const firestoreService = require('@crapougnax/firestore-export-import')
const path = require('path')

// list of JSON files generated with the export service
// Must be in the same folder as this script
const collections = ['languages', 'roles']

// Start your firestore emulator for (at least) firestore
// firebase emulators:start --only firestore

// Initiate Firebase Test App
const db = firestoreService.initializeTestApp('test', {
  uid: 'john',
  email: 'john@doe.com',
})

// Start importing your data
let promises = []
try {
  collections.map(collection =>
    promises.push(
      firestoreService.fixtures(
        path.resolve(__dirname, `./${collection}.json`),
        [],
        [],
        db,
      ),
    ),
  )
  Promise.all(promises).then(process.exit)
} catch (err) {
  console.error(err)
}
Obviously, since this data won't persist in the emulator, you'll typically inject it in the before() function of your test suite, or even before every test.
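For instance, with Mocha-style hooks (a sketch reusing the firestoreService, db, collections, and path from the snippet above):

// Re-seed the emulator once before the suite (or use beforeEach per test)
before(async () => {
  await Promise.all(
    collections.map(collection =>
      firestoreService.fixtures(
        path.resolve(__dirname, `./${collection}.json`),
        [],
        [],
        db,
      ),
    ),
  )
})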
There is no built-in way to copy data from a cloud project to the local emulator. Since the emulator doesn't persist any data, you will have to re-generate the initial data set on every run.
I was able to make some npm scripts to import from the remote Firestore to the local emulator and vice versa.
"serve": "yarn build && firebase emulators:start --only functions,firestore --import=./firestore_export",
"db:update-local-from-remote": "yarn db:backup-remote && gsutil -m cp -r gs://my-firebase-bucket.appspot.com/firestore_export .",
"db:update-remote-from-local": "yarn db:backup-local && yarn db:backup-remote && gsutil -m cp -r ./firestore_export gs://my-firebase-bucket.appspot.com && yarn run db:import-remote",
"db:import-remote": "gcloud firestore import gs://my-firebase-bucket.appspot.com/firestore_export",
"db:backup-local": "firebase emulators:export --force .",
"db:rename-remote-backup-folder": "gsutil mv gs://my-firebase-bucket.appspot.com/firestore_export gs://my-firebase-bucket.appspot.com/firestore_export_$(date +%d-%m-%Y-%H-%M)",
"db:backup-remote": "yarn db:rename-remote-backup-folder && gcloud firestore export gs://my-firebase-bucket.appspot.com/firestore_export"
So you can export the local Firestore data to the remote one with:
npm run db:update-remote-from-local
Or update your local Firestore data with the remote one:
npm run db:update-local-from-remote
These operations back up the remote Firestore data, making a copy of it and storing it in Firebase Storage.
I was about to add a CLI option to firebase-tools but I'm pretty happy with the node-firestore-import-export package.
yarn add -D node-firestore-import-export
"scripts": {
"db:export": "firestore-export -a ./serviceAccountKey.json -b ./data/firestore.json",
"db:import": "firestore-import -a ./serviceAccountKey.json -b ./data/firestore.json",
"db:emulator:export": "export FIRESTORE_EMULATOR_HOST=localhost:8080 && yarn db:export",
"db:emulator:import": "export FIRESTORE_EMULATOR_HOST=localhost:8080 && yarn db:import",
"db:backup": "cp ./data/firestore.json ./data/firestore-$(date +%d-%m-%Y-%H-%M).json",
"dev": "firebase emulators:start --import=./data --export-on-exit=./data",
},
You will need to create a service account in the Firebase console.
You can replace the GCLOUD_PROJECT environment variable with hard coded values.
open https://console.firebase.google.com/project/$GCLOUD_PROJECT/settings/serviceaccounts/adminsdk
mv ~/Downloads/myProjectHecticKeyName.json ./serviceAccountKey.json
That being said, the gcloud tools are definitely the way to go in production, as you will need storage backups anyway.
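A possible workflow with the scripts above (assuming the emulator is already running for the import step):

firebase emulators:start --only firestore   # in one terminal
yarn db:export                              # snapshot production to ./data/firestore.json
yarn db:emulator:import                     # replay that JSON into the running emulator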
You can use the fire-import npm package for importing both Firestore and Firebase Storage data.
There is also a way to import data into the local emulator from Google Cloud Storage without any commands:
export Firestore to a Google Cloud Storage bucket by clicking More in the Google Cloud console
choose your desired file in the Google Cloud Storage bucket
open the terminal (the Google Cloud Shell near the search bar)
in the terminal, click Open editor
right-click the desired file in the online VS Code editor and click Download.
You should start downloading a .tar file, which is in fact your exported Firestore data.
Create a folder in your root (as an example, you may call it 'firestore-local-data')
Copy (or unarchive) the data from the .tar file into this folder
run firebase emulators:start --import ./firestore-local-data
This should do the trick
I wrote a little script to be able to do that:
import * as admin from 'firebase-admin';
import * as fs from 'fs';

admin.initializeApp();
const db = admin.firestore();
const collections = ['albums', 'artists'];

for (const name of collections) {
  // each ./<collection>.json is expected to hold an array of documents
  const rawData = fs.readFileSync(`./${name}.json`, 'utf8');
  const arr = JSON.parse(rawData);
  for (const doc of arr) {
    db.collection(name).add(doc)
      .then(val => console.log(val))
      .catch(err => console.log('ERROR: ', err));
  }
}
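To make those writes land in the emulator rather than in production, point the Admin SDK at the emulator before running the script (the file name import.ts is hypothetical):

export FIRESTORE_EMULATOR_HOST=localhost:8080
npx ts-node import.ts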

'length error in the tickerplant kdb+/q

When I start up tick.q (with sym.q) and feed.q, using the files provided, as follows:
q tick.q sym -p 5010
q feed.q
GitHub links: https://github.com/KxSystems/cookbook/tree/master/start/tick ,
https://github.com/KxSystems/kdb-tick
The tickerplant process prints a 'length error on every update, which usually occurs when an incorrect number of elements is passed: https://code.kx.com/wiki/Errors
I suspect that this happens when the feed process calls .u.upd
Any suggestions as to how to solve this problem?
Entering \e 1 on the command line will suspend execution on error and run the debugger, allowing you to see what failed and query the variables, which should help pinpoint what is causing the issue.
More about debugging here https://code.kx.com/q/ref/debug/
If you are using the plain-vanilla tick setup from KX, there is no reason for that error to appear.
Also, I think you need to start the feed as q feed.q -t 200, otherwise you will get no data coming through.
Usually the 'length error appears when the table schema does not match the data. So if you have the sym.q file (and it is loaded correctly), you should not have that issue.
Just to confirm, this is the structure of your directory:
.
├── feed.q
├── README.md
├── tick
│   ├── r.q
│   ├── sym.q
│   └── u.q
└── tick.q
The sym.q file contains your table schema. If you change something in the feed handler, the table schema in sym.q must match that change (i.e. if you add a column in the feed, you must also add a column for it in the table).
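For example, a hypothetical sym.q schema and its matching extension (the column names are illustrative, not taken from the linked repo):

/ sym.q: a trade table with four columns, so the feed must send four lists
trade:([]time:`timespan$();sym:`symbol$();price:`float$();size:`long$())

/ if feed.q starts sending a fifth column, e.g. side, extend the schema too:
/ trade:([]time:`timespan$();sym:`symbol$();price:`float$();size:`long$();side:`char$())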
Open a new q session on some port (e.g. 9999), add your schema definition there, and define .u.upd to capture the incoming data, something like this:
.u.upd:{[t;d]
  .test.t:t;
  .test.d:d;
  t upsert d
  }
Now point your feed to this q session and stream some data; this will enable you to analyse the test variables in case of errors.
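Once an update has been captured, you can compare what the feed sent against the schema (assuming the feed sends a list of column vectors, as the standard feed.q does):

.test.t                     / the table name the feed targeted
cols value .test.t          / the columns the schema expects
count cols value .test.t    / how many columns the schema has
count .test.d               / how many columns the feed actually sent
/ a mismatch between the last two counts is exactly what raises 'length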