How to train a neural network using Intel integrated graphics

My computer has an Intel Xeon E3-1200 v2/3rd Gen Core processor Graphics Controller. I want to train a neural network using the integrated graphics. There is no need for a speed-up. What should I do?

Since Keras uses TensorFlow under the hood, TensorFlow's GPU support requires the NVIDIA CUDA and cuDNN packages to be installed. For GPU-accelerated training you will need a dedicated NVIDIA GPU; Intel onboard graphics can't be used for that purpose.
You can check the GPU support requirements at https://www.tensorflow.org/install/gpu
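As a quick sanity check (a minimal sketch; the exact devices listed depend on your machine), you can ask TensorFlow which devices it can see. On a machine with only Intel integrated graphics the GPU list will be empty and Keras training will simply run on the CPU:

```python
import tensorflow as tf

# List the devices TensorFlow can use; Intel integrated graphics will not
# show up here because TensorFlow's GPU backend requires CUDA-capable hardware.
print("GPUs visible to TensorFlow:", tf.config.list_physical_devices('GPU'))
print("CPUs visible to TensorFlow:", tf.config.list_physical_devices('CPU'))

# When the GPU list is empty, model.fit() falls back to the CPU automatically.
```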

Related

Image segmentation with Raspberry Pi

I have been trying to perform image segmentation with a Raspberry Pi. I searched for different pre-trained models to perform it and came across TensorFlow Lite, which has a DeepLab model in it; it is very small (2.7 MB) and can be used for IoT devices. But in my case, I have a custom dataset and I need to train the model on my dataset (i.e. training DeepLab with a custom dataset). My issue is that the Raspberry Pi has comparatively little RAM and storage. So, if I train DeepLab with the custom dataset, can I run it on the Raspberry Pi? If so, is there any tutorial or research paper about it?
You can use this training script. Clone the repository and run model.py from model/research/deeplab.
I wouldn't train the model on the Raspberry Pi, because it's damn slow. A better approach would be to train it on a PC (maybe with GPU support) and export the model to the Raspberry Pi.
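As a rough sketch of that workflow (assuming the model trained on the PC is a TensorFlow SavedModel; the paths here are hypothetical), you could convert the trained model to TensorFlow Lite and copy the resulting file over to the Raspberry Pi for inference:

```python
import tensorflow as tf

# Hypothetical path to the model trained and exported on the PC (SavedModel format).
converter = tf.lite.TFLiteConverter.from_saved_model("exported_deeplab_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional: shrink the model for the Pi
tflite_model = converter.convert()

# Write the .tflite file; copy it to the Raspberry Pi and run it there
# with the TensorFlow Lite interpreter.
with open("deeplab_custom.tflite", "wb") as f:
    f.write(tflite_model)
```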

How to use tensorflow-gpu in Unity mobile

I'm prototyping a mobile ML application within the Unity engine.
I have a trained TensorFlow graph (.pb) and I want to run the model in Unity on mobile (both Android and iOS).
With the OpenCVForUnity plugin and its dnn module, I can run the TensorFlow graph on mobile, but the problem is that it runs on the CPU.
I need a GPU-based solution, and it seems that OpenCVForUnity isn't the proper approach for that.
So, any idea for running the graph on the GPU in a Unity mobile environment?
You might want to use Barracuda, which will allow you to convert a TensorFlow model and use it in cross-platform Unity applications. Unity ML-Agents uses Barracuda, so you could use their code as a reference for how to utilize your neural network.

Video converter that supports GPU acceleration and a command line

Is there any paid/licensed video converter that supports GPU acceleration and command-line features? I need the command line for automation purposes and GPU acceleration to convert video in 4K for commercial use.

Is number recognition on iPhone possible in real-time?

I need to recognise numbers from the camera image on iPhone, in real-time. I know there will be no more than 5 digits on the image.
Is this problem realistic to solve given the computational specifications of the iPhone?
Does anyone have any experience using the Tesseract OCR library, and do you think it could be solved by using it?
That depends on your definition of "real-time", but yes, it should be possible to do relatively fast recognition of just the digits 0-9 on an iPhone 4, particularly if you can constrain the fonts, lighting conditions, etc. that they will appear in.
I highly recommend reading the article on how Sudoku Grab does its recognition of puzzles using the iPhone camera. In their case, a trained neural network was used to identify the digits, which should be reasonably simple and fast on modern iOS hardware.
The current recognition libraries out there, like OpenCV, will use the iPhone's CPU to do the processing. I've heard that they can do even more complex tasks like facial recognition fast enough to use with video sources while showing a minimal amount of stutter.
For even better performance, I believe that there's a lot of potential in the programmable GPUs on the newer iOS devices. In my benchmarks, I saw a 14X - 28X speedup when using the iPhone 4's GPU for simple image processing. While few people are looking at this right now, something like Sudoku Grab's neural network should be a parallel enough process to benefit from running on the GPU.
It should be computationally possible. There are apps that can read a bar code in real time, and also an app that does real-time translation (Word Lens). I'm not sure what libraries they use, however.
Yes, it is possible using the Tesseract engine.
Here is some sample code if you would like to check it out:
https://github.com/nolanbrown/Tesseract-iPhone-Demo
There is a free SDK for that: http://rtrsdk.com/. It supports both iOS and Android, works in real time, and helps you capture any text; numbers should not be a problem.
Disclaimer: I work for ABBYY
Yes. Bender can help you with that. It lets you build and run neural nets on iOS. As it uses Metal under the hood, it runs fast and smooth. It also supports running TensorFlow models directly.
So you can run an existing TensorFlow model trained for digit recognition in Bender; see "Handwritten Digit Recognition using Convolutional Neural Networks in Python with Keras" if you need help training one.
Disclaimer: I worked on this project.
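For reference, a minimal Keras digit-recognition CNN along the lines of the tutorial mentioned above might look like this (a sketch only; the layer sizes and epoch count are illustrative, not taken from the tutorial):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# A small convolutional network for the digits 0-9.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```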

Convolving an image with OpenGL ES on iPhone: possible?

I've googled around a few times, but I have not gotten a straight answer. I have a matrix that I would like to convolve with a discrete filter (e.g. the Sobel operator for edge detection). Is it possible to do this in an accelerated way with OpenGL ES on the iPhone?
If it is, how? If it is not, are there other high-performance tricks I can use to speed up the operation? Wizardly ARM assembly operations that can do it fast? Ultimately I want to perform as fast a convolution as possible on an iPhone's ARM processor.
You should be able to do this using programmable shaders under OpenGL ES 2.0. I describe OpenGL ES 2.0 shaders in more detail in the video for my class on iTunes U.
Although I've not done image convolution myself, I describe some GPU-accelerated image processing for Mac and iOS here. I present a sample application that uses GLSL shaders (based on Core Image filters developed by Apple) that does realtime color tracking from the iPhone's camera feed.
Since I wrote this, I've created an open source framework based on the above example which has built-in image convolution filters, ranging from Sobel edge detection to custom 3x3 convolution kernels. These can run up to 100X faster than CPU-bound implementations.
However, if you were to do this on the CPU, you might be able to use the Accelerate framework to run some of the operations on the iPhone's NEON SIMD unit. In particular, FFT operations (which are usually a key component in image convolution filters, or so I've heard) can get a ~4-5X speedup by using the routines Apple provides here.
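To make the operation itself concrete (an illustrative sketch in Python/NumPy rather than iOS code; the random image is a placeholder), convolving an image with a 3x3 Sobel kernel, either directly or via the FFT, looks like this:

```python
import numpy as np
from scipy import signal

# 3x3 Sobel kernel for horizontal gradients.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)

image = np.random.rand(256, 256).astype(np.float32)  # placeholder grayscale image

# Direct 2D convolution.
edges_direct = signal.convolve2d(image, sobel_x, mode="same")

# FFT-based convolution; equivalent result, and faster for large kernels,
# which is why FFT routines matter for this kind of filter.
edges_fft = signal.fftconvolve(image, sobel_x, mode="same")
```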