Can I play back a CAN log file to a CAN input on a V2X device? (CANoe)

I am currently working on implementing a Cohda DSRC V2X device that takes a vehicle CAN input. I don't have access to the vehicle, so I want to simulate the input with a prerecorded CAN log file from it. If possible, we want to play back the CAN log file into the V2X device. I was directed to look into the Vector CANoe/CANalyzer products. After looking into their products, documentation, forums, and FAQs, I have not been able to determine whether this is possible. So, can this be done, and if so, how?

Perhaps you could use a Replay block in the Vector CANoe environment, provided the log file is in a Vector-supported format and you are not using any secure communication.
For reference, please see:
the Vector Public Knowledge Area article on logging formats, especially the note at the bottom.
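If CANoe turns out not to be an option, the same replay idea can also be scripted with the open-source python-can package. A minimal sketch, assuming a Vector ASC-style log file and a SocketCAN interface named can0 (both placeholders for your setup):

import can

# Open the CAN interface that feeds the V2X device (placeholder names).
bus = can.Bus(channel="can0", interface="socketcan")

with can.LogReader("vehicle.asc") as reader:
    # MessageSync replays the logged frames with their original timing.
    for msg in can.MessageSync(reader, timestamps=True):
        bus.send(msg)

bus.shutdown()

python-can also ships a ready-made replay script (python -m can.player) that does essentially the same thing.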

Related

Retrieving recorded images stored in an AXIS recorder located in a remote area

I am planning to buy and use an AXIS Camera Station S2208 Appliance, and I am looking for a way to retrieve images stored in this recorder in a remote area via an API (not retrieving from the camera, but from the recorder).
I guess the VAPIX (or ONVIF) API is responsible for this task, but I am not sure where the exact description is (I looked over the VAPIX Library page but found no clue).
My questions are as follows:
To begin with, is it possible to retrieve images from the recorder via VAPIX (or ONVIF)?
If it is possible, where is the description on the VAPIX Library page (under Network video, Applications, Audio systems, Physical access control, or Radar)?
If not, are there any other ways to do it?
I also searched the AXIS Camera Station User Manual and found a Developer API section; however, the details were not clear to me.
I am posting here because I couldn't get an answer from the official page.
Any help would be great. Thanks!

Connecting a CAD model (SolidWorks, AutoCAD, or CATIA) with real-time measurements from a Raspberry Pi or Arduino sensor

To present my question, I will simplify my example.
I will connect a sprocket to a stepper motor and measure acceleration with an accelerometer. The data will be captured using either an Arduino or Raspberry Pi sensor setup. The measurements will then be stored in a cloud-based environment (or something similar) and sent to the CAD model (that's the idea).
Basically what I would like to achieve is to:
connect the movement of the stepper motor with the SW/CATIA/AutoCAD model (if the physical sprocket is spinning, so is the one in the CAD model),
in case the measurements identify a problem in the assembly, the critical/weak component would somehow be highlighted inside the CAD model.
Has anyone an idea how this could be done or if it is even possible?
I think it is definitely possible (and quite easy) in CATIA (which is the only one I know).
CATIA has COM Automation exposed (i.e., you can interact with it like you do with MS Office apps), and naturally you would do it by writing a VBA project in the same fashion.
But VBA projects have a lot of limitations, and I think it would be almost impossible to have a constantly running background process such as the one you describe.
If you switch to Python, you'll be able to:
access all of Python's functionality; for the scope you describe, you'll have endless possibilities for getting data from a sensor, handling it, and then sending it to the CAD application.
run the script whenever you want, totally independently of the VBA editor and CATIA's macro-related machinery. The script just sends commands to CATIA, which executes them instantly.
have everything in real time, because if you enable Automatic Update in CATIA, each command sent via COM is executed immediately and the Part or Product updated accordingly.
I have already translated a complex project from VBA to Python with success; it interacts seamlessly with CATIA and Excel at the same time and transfers data between them.
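For example, a minimal sketch of that Python/COM approach (this assumes Windows with the pywin32 package and a running CATIA session; the parameter name "RotationAngle" is a hypothetical placeholder for whatever drives your own model):

import win32com.client

def push_sensor_value_to_catia(angle_deg):
    # Attach to the running CATIA session.
    catia = win32com.client.Dispatch("CATIA.Application")
    catia.Visible = True

    part_doc = catia.ActiveDocument        # e.g. the open sprocket CATPart
    part = part_doc.Part

    # "RotationAngle" is a hypothetical user parameter driving the model.
    param = part.Parameters.Item("RotationAngle")
    param.Value = angle_deg

    part.Update()                          # refresh the geometry

# Example: feed in a value computed from the accelerometer stream.
push_sensor_value_to_catia(42.0)

With Automatic Update enabled, each such call shows up in the 3D view as soon as it is sent.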
It is definitely possible; look at what has already been done with SolidWorks and MS Kinect.
All you need to do is identify the component that you want to affect, calculate a new transform based on your sensor input, and assign that transform to the component.
To highlight it, you can either change the color of the body or use the built-in Highlight method.
That being said, I wouldn't recommend this as your first SolidWorks project.
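As a rough sketch of the idea in Python over COM (assuming Windows with pywin32 and an open assembly; the component name is a placeholder, and the exact calls should be checked against the SolidWorks API documentation):

import win32com.client

swApp = win32com.client.Dispatch("SldWorks.Application")
swApp.Visible = True

assy = swApp.ActiveDoc                     # the open assembly document
math_util = swApp.GetMathUtility()

# Select the component to drive ("Sprocket-1@MyAssembly" is a placeholder).
assy.Extension.SelectByID2("Sprocket-1@MyAssembly", "COMPONENT",
                           0, 0, 0, False, 0, None, 0)
comp = assy.SelectionManager.GetSelectedObject6(1, -1)

# A MathTransform is built from 16 doubles: 9 rotation terms,
# 3 translation terms, 1 scale factor, 3 unused.
data = [1, 0, 0, 0, 1, 0, 0, 0, 1,        # identity rotation
        0.0, 0.0, 0.01,                   # translation from the sensor
        1.0, 0, 0, 0]                     # scale + padding
comp.Transform2 = math_util.CreateTransform(data)
assy.EditRebuild3()                        # rebuild so the move shows up

Note that assigning Transform2 only moves a component that isn't fully constrained by mates.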

(Bluemix) Conversion of audio file formats

I've created an Android application and connected different Watson services, available on Bluemix, to it: Natural Language Classifier, Visual Recognition, and Speech to Text.
1) The first two work well; I have a little problem with the third one, concerning the audio format. The app should record 30 seconds of audio, save it to memory, and send it to the service to obtain the corresponding text.
I've used an instance of the MediaRecorder class to record the file. It works, but the available output formats are AAC_ADTS, AMR_WB, AMR_NB, MPEG_4, THREE_GPP, RAW_AMR, and WEBM.
The service, however, accepts these input formats: FLAC, WAV, PCM.
What is the best way to convert the audio file from the first set of formats to the second? Is there a simple method to do that? For example, from THREE_GPP or MPEG_4 to WAV or PCM.
I've googled for information and ideas, but I've only found a few slow, poorly explained methods.
I'm looking for a fast method, because I want to keep the latency of the conversion and of the processing by the service as short as possible.
Is there an available library that does this? Or a simple code snippet?
2) One last thing:
SpeechResults transcript = service.recognize(audio, HttpMediaType.AUDIO_WAV);
System.out.println(transcript);
"transcript" is a json response. Is there a method to directly extract only the text, or should I parse the json?
Any suggestion will be appreciated!
Thanks!
To convert the audio recordings into different formats/encodings, you could:
- find an audio-encoder library to include in your app that supports the required formats, but it could be very heavy to run on a mobile device (if you find the right lib)
- develop an external web application to which you send your recording, which encodes it and returns it as a file or a stream
- develop a simple web application that works like a live proxy: it gets the recorded file, converts it on the fly, and sends it to Watson
Both the 2nd and 3rd options expect you to use an encoding tool like ffmpeg.
The 3rd one is a little more complex to develop, but it could save you two HTTP requests from your Android device.
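To make the server-side idea concrete, a minimal conversion sketch (assuming ffmpeg is installed on the server; the file names are placeholders):

import subprocess

def to_wav(src_path, dst_path):
    # 16 kHz mono WAV is one of the formats Speech to Text accepts.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src_path,
         "-ar", "16000", "-ac", "1", dst_path],
        check=True,
    )

to_wav("recording.3gp", "recording.wav")

Regarding your second question: the recognize response is plain JSON, and the text sits under results[*].alternatives[*].transcript, so you do have to parse it (the Java SDK exposes the same structure through its getters). A small sketch of the extraction, with an illustrative sample response:

import json

# Illustrative sample of the service's JSON response structure.
response = ('{"results": [{"alternatives": '
            '[{"transcript": "hello world", "confidence": 0.91}], '
            '"final": true}], "result_index": 0}')

data = json.loads(response)
text = " ".join(r["alternatives"][0]["transcript"] for r in data["results"])
print(text)   # -> hello world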

Text-to-speech conversion on iPhone using the Nuance SDK: generating a .wav file

I am converting text to speech using the Nuance SDK, and it works fine.
I want to mail the output to the user as a file, "voice.wav" for example.
Being new to this field, I'm not sure: does this text-to-speech process create an output file?
I don't see an output file; does it exist?
Can I make it generate one?
Thanks in advance.
At this time, the SDKs/libraries don't expose access to the raw audio data. This is done in an effort to guarantee an optimal audio subsystem, as well as to simplify the process of speech-enabling apps.
Depending on the plan you're enrolled in, you may be able to use the HTTP service, which means you will have to construct your own audio layer. That said, this is your best bet for getting access to the audio data if you need it.
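If your plan does include the HTTP service, the flow is simply: POST the text, receive the encoded audio bytes, and write them to voice.wav yourself. A generic sketch of that flow, in which the endpoint URL and parameter names are hypothetical placeholders to be replaced with the values from your Nuance account:

import requests

# Placeholder endpoint and credentials -- substitute the values from
# your Nuance developer account; the parameter names are hypothetical.
resp = requests.post(
    "https://tts.example-nuance-host.com/tts",
    params={"appId": "YOUR_APP_ID", "appKey": "YOUR_APP_KEY"},
    headers={"Content-Type": "text/plain", "Accept": "audio/x-wav"},
    data="Hello, this is a test.",
)
resp.raise_for_status()

with open("voice.wav", "wb") as f:
    f.write(resp.content)   # raw audio bytes you can attach to an email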

I'm trying to program a unique app and use voice commands to trigger specific functions within the app

If anyone can help me with this, I'd be eternally in their debt.
Without getting bogged down in details, I'm trying to program an app so that, for instance, while the application is currently launched, if I say the words "activate function A", a specific function which already exists in my app is activated.
Have I explained myself clearly? In other words, on the screen of the phone is a button which says "function A". When the software is "armed" and in listening mode, I want the user to have the ability to simply say the words "activate function A" (or any other phrase of my choice), and the screen option will be selected without requiring the user to press the button with their hand; rather, the option is selected/activated via voice command.
My programmers and I have faced difficulties incorporating this new voice-command capability, even though it is obviously possible to do Google searches with voice commands, for instance. Other voice-command apps are currently in circulation, such as SMS dictation apps, email-writing apps, etc., so it is clearly possible to create voice-command apps.
Does anyone know if this is possible, and if so, do you have advice on how to implement this function?
QUESTION 2
Assuming that we are unable to activate function A via voice command, is it possible to use a voice command to cause the phone to place a call, which is received by our server? The server then 'pings' the iPhone and instructs it to activate function A.
For this workaround to work, I would need the ability to determine the exact phrase. In other words, the user can't be forced to use the words "call function A"; I need the ability to select the phrase which launches the function.
Hopefully I've been clear.
In other words, as a potential workaround to the obstacles we've been facing with using voice commands to activate a specific function within our app, is it possible to harness the voice-command capability already present in the phone, i.e., to place a phone call? This call would then be received by our server, and the server would accordingly ping the phone which placed the call and instruct it to activate the function.
I obviously understand that the app must be currently launched before it is possible for my application to receive the instruction from the server.
If someone can help me solve this vexing problem, it is not hyperbole to say that you would change my life!
Thanks so much in advance for any help one of you kind souls can provide!!!
Michael
I don't believe the iPhone comes with any built-in speech recognition functions. Consider speaking to Nuance about buying and embedding one of their speech recognition engines. They have DragonDictate for iPhone, but they also provide a fair number of other recognition engines that serve different functions. Embedded solutions are clearly one of their areas of expertise.
Your other path of pushing the audio to your server may be more involved than you expect. Typically this process involves end-pointing (determining when speech is present) and extraction of basic features, so that the raw stream doesn't need to be passed along. Again, investigating the speech recognition engine you intend to use may provide you with the data-processing details you need. Passing continuous raw voice from all phones to your servers is probably not going to be practical.