Touchscreen touch not working properly even after calibration using ts_calibrate

We are using a touchscreen with the ft5x06 touch driver. To calibrate and test the touch device, I downloaded tslib from GitHub and installed it to a separate folder with the following commands:
cd ~/tslib
./autogen.sh
./configure --prefix=/home/user2/Desktop/tslib_arm
make
sudo make install
Now my embedded board's Desktop has a folder tslib_arm that contains the compiled tslib output (bin, etc, lib, and so on).
When I run ts_calibrate (./ts_calibrate) from /home/user2/Desktop/tslib_arm/bin, the calibration screen appears and calibration works. And if I run ./ts_test, it offers options like drag and draw, which also work fine.
But after closing these apps (ts_calibrate or ts_test), when I check touch on the Desktop or in any application during normal operation, it still seems uncalibrated.
Why is that?
Do I need to copy this tslib_arm folder, or any other files from it, to the system's rootfs location?

That's because your "Desktop or any application" most probably doesn't directly implement and use the API that tslib offers (to read touch input samples).
tslib includes documentation on how to use the filtered input in your environment.
What should always work is using the "ts_uinput" daemon program that comes with tslib (just like "ts_calibrate" does). It's a driver for tslib that creates a (second) touchscreen input event device for you in /dev/input/. All you need to do is tell your "Desktop or application" to use it. All desktop environments have options to choose which input device you want to use.
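As a rough illustration of how that daemon can be started, assuming your tslib build ships ts_uinput with the -d/-v options (the device node, calibration file and install prefix below are examples for this setup, not fixed values):

export TSLIB_TSDEVICE=/dev/input/event1            # the real ft5x06 event device
export TSLIB_CALIBFILE=/etc/pointercal             # calibration data written by ts_calibrate
export TSLIB_CONFFILE=/home/user2/Desktop/tslib_arm/etc/ts.conf
export TSLIB_PLUGINDIR=/home/user2/Desktop/tslib_arm/lib/ts
/home/user2/Desktop/tslib_arm/bin/ts_uinput -d -v  # -d: run as daemon, -v: print the new /dev/input/eventX it creates

Then point your desktop environment at the event device ts_uinput reports instead of the raw ft5x06 one.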

Related

How to get ML Agents to run in Unity?

I've tried going through the documentation, but the steps mentioned there aren't quite clear enough. Is there a good step-by-step video tutorial that could be helpful? The ones I saw on YouTube are pretty old and don't work with the latest updates on GitHub https://github.com/Unity-Technologies/ml-agents
This will help you set up ML-Agents version 14.
https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md
I am currently using this version.
I suggest you create a new Python environment just for this purpose.
Don't use pip install mlagents in your Python environment's terminal; it was not updated to 14 when I installed it, so use the link above for the complete guide, but here is the important part.
Download the zip file of the ml-agents master branch.
When you extract this zip, open the extracted folder.
Then open cmd at that location. Activate your Python environment and follow these steps:
You should install the packages from the cloned repo rather than from PyPI. To do this, you will need to install ml-agents and ml-agents-envs separately. Open cmd inside the ml-agents-master folder, activate your Python environment and then, from the repo's root directory, run:
cd ml-agents-envs
pip3 install -e ./
cd ..
cd ml-agents
pip3 install -e ./
It is very important that both packages are installed from the same ml-agents folder; this will not work if the versions are not compatible. If installed from the same folder, both packages will have the same version, i.e. 14 in this case.
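As a quick sanity check (not part of the original guide), you can compare the installed versions after both installs:

pip3 show mlagents mlagents-envs   # the Version: lines of the two packages should match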
These two packages will let you use the predefined PPO and SAC algorithms.
I assume you have Unity 2018 or 2019 installed. Open it and go to File → Open Project.
In the open dialog box, select the Project folder inside the ml-agents-master folder that you downloaded.
Sorry that most things are named Project, but don't be confused: earlier, Project was a folder inside ml-agents-master, but after opening it you will see a Project toolbar. Navigate to Assets\ML-Agents\Examples\3DBall\Scenes and double-click 3DBall.
This will open the scene. You can also see TFModels and Scripts; they are the predefined neural network and code, respectively.
Select Agent in the Hierarchy toolbar on the left side. (This changes only that one instance of 3DBall. It is better to go to Prefabs and double-click 3DBall: this opens a single 3DBall whose settings are applied to all instances, and in the Hierarchy you will now see only one 3DBall. Select its Agent; the rest is the same, but now the changes affect all copies of 3DBall. Prefabs are used to control all the copies, which helps when training multiple agents at the same time.) Then the Inspector view opens on the right side; inside Behavior Parameters you can see Model and an input box next to it. Keep the agent selected, otherwise this Inspector view will disappear.
Now go to the TFModels folder; you will see a 3DBall file that looks like a neural network. Drag it onto that Agent's Behavior Parameters → Model field.
After following all these steps, click the Play option at the top. The predefined model will start playing and you will see that it can balance the ball pretty well.
Now that you have seen how the trained model works and want to train again using the predefined PPO and SAC, do the following:
Go to ml-agents-master\config, where you will find the file trainer_config.yaml. Open cmd, activate your environment, and run:
mlagents-learn trainer_config.yaml --run-id=firstRun --train
When the message "Start training by pressing the Play button in the Unity Editor" is displayed on the screen, you can press the ▶️ button in Unity to start training in the Editor. You can press Ctrl+C to stop the training; your trained model will be at models/run-identifier/behavior_name.nn, where behavior_name is the Behavior Name of the agents corresponding to the model.
Move your model file into Project/Assets/ML-Agents/Examples/3DBall/TFModels/.
Open the Unity Editor, and select the 3DBall scene as described above.
Select the 3DBall prefab Agent object.
Drag the <behavior_name>.nn file from the Project window of the Editor to the Model placeholder in the Ball3DAgent inspector window.
Press the ▶️ button at the top of the Editor.
Now, for your own RL algorithm in Python:
See this Jupyter notebook; it shows how to activate the Unity gym environment, get observations and rewards, and reset the environment. For this you can also create a Unity executable; it's just creating exe files with some settings, which you will find here.
Hope this works without any issues. Good luck with it.

DllNotFoundException while building a desktop Unity application using ARToolkit

I am creating an augmented reality desktop application using Unity and ARToolkit. For test purposes I created a single-scene application to test ARToolkit; it runs perfectly in the Unity editor, that is, the webcam and everything work correctly there. After building the application, when I open the .exe file, it does not open the webcam and gives an ARWrapper.dll DllNotFoundException. How should I resolve this, and how do I enable my laptop's webcam in the .exe application? The attached image shows the problem at hand.
Have a look at the documentation here:
http://artoolkit.org/documentation/doku.php?id=6_Unity:unity_on_windows
It looks like ARWrapper.dll needs to be in the same directory as your app.
Best
[edit]
Excerpt from the documentation I mention above:
"
In spite of the ARWrapper.dll clearly being in the referred to folder, the Unity Editor may not be able to find a required dependent DLL (i.e. a DLL on which the ARWrapper DLL depends). Confusingly, the dependent DLLs must be present in same folder as the .exe file of the host application (the Unity Editor, in this case), which is typically C:\Program Files (x86)\Unity\Editor. The required DLLs are normally (at least since ARToolKit for Unity v2.0.3) installed by the ARToolKit for Unity installer, but if you are having difficulty, you can double check. Check that the following are present in that folder:
ARvideo.dll
pthreadVC2.dll
opencv_core246.dll
opencv_flann246.dll
DSVL.dll
"

Windows 10: script to pin app to Start Menu

I have a Universal App that I'm sideloading in Windows 10. I would like to create a script (PowerShell, VBS, batch, etc.) to pin it to the Start Menu.
I have found many examples of how to write a script that pins a desktop application (like this one: https://gallery.technet.microsoft.com/scriptcenter/Script-to-pin-items-to-51be533c ). I've tried using the script to pin the actual app .exe file located in C:\Program Files\WindowsApps... but that doesn't work. When I try to pin that way, I run DoIt(), which doesn't return anything. There are no error messages, but the tile is not pinned to the Start Menu.
My guess is that instead of using a ComObject to interact with the file system I need a ComObject that interacts with some sort of App manager. I'm not sure how to get a list of available ComObjects to tell if this is even a possibility.

Do I have to build my LabVIEW instrument driver under Program Files?

I'm trying to build a LabVIEW plug and play instrument driver project for a device we sell. I followed the instructions to create a project, and it created the project inside the LabVIEW program folder:
C:\Program Files\National Instruments\LabVIEW 2011\instr.lib
I suppose I could connect that folder to source control and just do all the work there, but it feels weird to be working under Program Files. When I tried to move the project folder out into my regular workspace folder, it broke all the subpalette files (*.mnu). I could recreate them, but I'm afraid they wouldn't work for our customers when they install the driver from the LabVIEW web site.
Is it possible to move a driver project around, or does it have to stay in the default location? If one of our customers has installed LabVIEW in a different location (say on drive D:) will the driver menus not work for them?
I'm not in favour of user.lib for SCC'd items; using several LabVIEW versions at a time is a big problem.
Here is my routine:
Create the instrument library and save all the code in a folder whose name starts with an underscore ('_'), e.g. _foo.
Create an .mnu file (Mylib.mnu) in the parent folder of '_foo' and add the icons you need.
With the OpenG package builder I create an installer routine that places the .mnu file and the folder in instr.lib.
After a restart of LabVIEW the instrument driver shows up in the instruments palette.
If you keep the code in the same relative position to the mnu file there is no problem with missing VIs.
Ton
Instrument drivers are always located in the 'instr.lib' folder in the current LabVIEW version folder. There is an environment path set up in LabVIEW for this instrument driver folder, so it will always point to the correct drive for the LabVIEW installation in use.
You should keep the folder in the location used by the wizard to ensure that when distributed to your customers the sub palette menus point to the correct location and all the VIs link correctly.
I use source control for user.lib which is in a similar location and have no problems.

Developing for the iPhone outside Xcode

I'd like to develop and run my iPhone applications from the command line and my personal editor instead of having to use Xcode.
So far I've been able to edit all the files in Emacs and run xcodebuild in the project to compile/link/etc.
The next step would be to create a Makefile task to launch the iPhone Simulator with my current application. Any ideas of how can I do that?
Update: I'm not interested in XCode calling my editor, I just want to forget about the IDE as much as I can.
All you need to do is copy the built .app from wherever Xcode puts it to ~/Library/Application Support/iPhone Simulator/[some version]/Applications/[somefolder]/.
Then, launch /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/Applications/iOS Simulator.app. Not sure how to get it to launch a specific application, but that'll take you to the home screen.
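As a sketch of the Makefile-style launch step the question asks about (the app name, simulator version and build folder below are assumptions for illustration, not values taken from Xcode itself):

# build, copy the .app into the simulator's Applications folder, then open the simulator
xcodebuild -sdk iphonesimulator
SIM_APPS="$HOME/Library/Application Support/iPhone Simulator/6.1/Applications"
mkdir -p "$SIM_APPS/MyApp"
cp -R "build/Debug-iphonesimulator/MyApp.app" "$SIM_APPS/MyApp/"
open "/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/Applications/iOS Simulator.app"

These lines can be pasted into a Makefile target (with tab-indented recipe lines) or a small shell script.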
Note also that you can set up Xcode to use external editors, even for source code. In this setting, you'd open Xcode to look at the tree view displaying the files and other items making up your project, but once you double-click a source code file it would open in e.g. Emacs.
There's a screencast over at Mac Developer Network demonstrating this: link
I doubt it. If you jailbreak your phone and install SSH on it you could set up something to copy the .app over wifi, but that's a fair bit of work. – Noah Witherspoon Jan 13 '09 at 5:24
I did all of my early iPhone development work this way. Just ssh'ing over the binary executable and whatever other files you might need (after you locate the app folder on your phone) is actually much faster than installing the application from Xcode. Note that I wasn't running the debugger.
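A minimal sketch of that ssh/scp workflow, assuming a jailbroken phone reachable as iphone.local and that you have already located the installed app's folder on the device (the UUID path below is a placeholder):

# copy the freshly built binary over the already-installed app on the phone
scp -r build/Release-iphoneos/MyApp.app/* root@iphone.local:/var/mobile/Applications/SOME-UUID/MyApp.app/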