Do you know of a good C/C++ library for Google Cloud Storage?
I can find a Python library for it, but I can't find one for C/C++ (or Objective-C).
GCS has a supported C++ client library. The source is here: https://github.com/googleapis/google-cloud-cpp
The full documentation is here: https://cloud.google.com/storage/docs/reference/libraries#client-libraries-install-cpp
Here's an example of downloading an object and counting the number of lines:
#include "google/cloud/storage/client.h"
#include <iostream>
namespace gcs = google::cloud::storage;
int countLines(std::string bucket_name, std::string object_name) {
// Create aliases to make the code easier to read.
namespace gcs = google::cloud::storage;
// Create a client to communicate with Google Cloud Storage. This client
// uses the default configuration for authentication and project id.
google::cloud::StatusOr<gcs::Client> client =
gcs::Client::CreateDefaultClient();
if (!client) {
std::cerr << "Failed to create Storage Client, status=" << client.status()
<< "\n";
return 1;
}
gcs::ObjectReadStream stream = client.ReadObject(bucket_name, object_name);
int count = 0;
std::string line;
while (std::getline(stream, line, '\n')) {
++count;
}
return count;
}
There is a list of Google's client libraries (including Objective-C) here.
In the Gnome tree there is an OAuth2 library written in C:
http://git.gnome.org/browse/librest/tree/
This is part of Gnome's librest package, a library that facilitates REST transactions. I have not used it myself, but here are a few observations:
It looks like you will need to use autotools to generate the configure script. The docs say to just run the configure script, but the docs are pretty old.
It is still being developed (the most recent check in was December 2012).
If you do try it please report your experiences. (Thank you in advance!)
I am customizing ICN (IBM Content Navigator) 2.0.3 and my requirement is to restrict users from uploading files over 10 MB, and to allow only .pdf and .docx files.
I know I have to extend/customize the AddContentItemDialog, but there is very little detail on exactly how to do it, and no videos either. I'd appreciate it if someone could offer guidance.
Thanks
I installed the development environment but I am not sure how to extend the AddContentItemDialog.
public void applicationInit(HttpServletRequest request,
PluginServiceCallbacks callbacks) throws Exception {
}
I also want to know how to roll out the changes to ICN.
This can be easily extended. I would suggest reading the ICN Redbook for the details on how to do it, but it is pretty standard code.
Regarding rolling out the code to ICN, there are two ways:
- If you are using a plugin: just replace the JAR file in the server location and restart WAS.
- If you are using EDS: you need to redeploy the web service and restart WAS.
Hope this helps.
thanks
Although there are many ways to do this, one way indeed is to extend, or augment, the AddContentItemDialog as you quoted. After looking at the (rather poor) IBM documentation, I figured you could probably use the onAdd event/method.
Dojo's dojo/aspect#around allows you to do exactly that, for example:
require(["dojo/aspect", "ecm/widget/dialog/AddContentItemDialog"], function(aspect, AddContentItemDialog) {
aspect.around(AddContentItemDialog.prototype, "onAdd", function advisor(original) {
return function around() {
var files = this.addContentItemGeneralPane.getFileInputFiles();
var containsInvalidFiles = dojo.some(files, function isInvalid(file) {
var fileName = file.name.toLowerCase();
var extensionOK = fileName.endsWith(".pdf") || fileName.endsWith(".docx");
var fileSizeOK = file.size <= 10 * 1024 * 1024;
return !(extensionOK && fileSizeOK);
});
if (containsInvalidFiles) {
alert("You can't add that :)");
}else{
original.apply(this, arguments);
}
}
});
});
Just make sure this code gets executed before the actual dialog is opened. The best way to achieve this is by wrapping the code in a new plugin.
Now, on creating/deploying plugins: the easiest way is this wizard for Eclipse (see also a repackaged version for newer Eclipse versions). Just create a new arbitrary plugin and paste this JavaScript code into the generated .js file.
Additionally, it might be good to note that you're only restricting this specific dialog to specific files. It would probably be a good idea to also create a requestFilter to limit all possible uses of the addContent API...
I have experience using the static GTFS files, which define the operating model for a specific public transport network. However, my city has just released a real-time feed with bus locations and other status updates for the network.
My question is: how should I view this feed in real time and record it into a database? The link to the real-time feed is as follows: https://gtfsrt.api.translink.com.au/
The GTFS-realtime spec now includes code samples for parsing GTFS-realtime data in a variety of languages:
https://developers.google.com/transit/gtfs-realtime/code-samples
It's a good place to start when it comes to parsing GTFS-realtime data in your favorite language.
I needed to install Google's protocol buffer compiler, then compile gtfs-realtime.proto with it to generate the classes that can read the feed.
Install the NuGet package Google.Protobuf:
PM> Install-Package Google.Protobuf -Version 3.4.1
private FeedMessage _feedMessage;

// FeedMessage is the class generated from gtfs-realtime.proto;
// parse it with the Google.Protobuf runtime installed above.
using (MemoryStream protobufMemoryStream = new MemoryStream())
using (Stream protobufStream = await _httpClient.GetStreamAsync("http://gtfs.ovapi.nl/new/vehiclePositions.pb"))
{
    protobufStream.CopyTo(protobufMemoryStream);
    protobufMemoryStream.Position = 0;
    _feedMessage = FeedMessage.Parser.ParseFrom(protobufMemoryStream);
}
In _feedMessage you now have the deserialized GTFS-realtime model, which you can persist to a database.
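To actually record the feed, you can loop over the parsed message. The sketch below assumes the FeedMessage/FeedEntity classes were generated by protoc from gtfs-realtime.proto (verify the property names against your generated code), and InsertVehiclePosition is a hypothetical stand-in for your own database call:
foreach (FeedEntity entity in _feedMessage.Entity)
{
    // Not every entity carries a vehicle position; some are trip updates or alerts.
    if (entity.Vehicle == null || entity.Vehicle.Position == null)
        continue;

    string vehicleId = entity.Id;
    float latitude = entity.Vehicle.Position.Latitude;
    float longitude = entity.Vehicle.Position.Longitude;
    ulong timestamp = entity.Vehicle.Timestamp;

    // InsertVehiclePosition is a hypothetical helper wrapping your own
    // database insert (e.g. an SqlCommand or an ORM call).
    InsertVehiclePosition(vehicleId, latitude, longitude, timestamp);
}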
Is there a way to access the HTML5 File API in the Firefox Add-on SDK from a content script?
This is needed in order to store user-added words and their meanings. The data can grow large, so local storage isn't an option.
window.requestFileSystem3 = window.requestFileSystem || window.webkitRequestFileSystem;
gives me the error TypeError: window.requestFileSystem3 is not a function.
I am asking this because I am porting this code from a Google Chrome extension, which allows accessing the File API in a content script.
Additional Questions
1) If the HTML5 File API is not available, should I use the file module?
2) Does the file module allow access to any file on the file system, as opposed to the HTML5 File API, which only provides sandboxed access to the file system?
3) Assuming I have to use the file module, what would be the best location to store my files (like the user profile directory or the extension directory), and how would I get this path in code?
I apologize for so many sub-questions inside this question. Google wasn't very helpful regarding this topic.
Any sample code would be very helpful.
Firefox doesn't support writing files via the File API yet, and even when this is added it will probably be accessible to web pages only, not extensions. In other words: yes, if you absolutely need to write to files then you should use the low-level APIs. You want to store your data in the user profile directory (there is no extension directory; your extension is usually installed as a single packed file). Something like this should work to write a file:
var file = require("sdk/io/file");
var profilePath = require("sdk/system").pathFor("ProfD");
var filePath = file.join(profilePath, "foo.txt");
var writer = file.open(filePath, "w");
writer.writeAsync("foo!", function(error)
{
  if (error)
    console.log("Error: " + error);
  else
    console.log("Success!");
});
For reference: sdk/io/file, sdk/system
You could use TextReader.read() or file.read() to read the file. Unfortunately, Add-on SDK doesn't seem to support asynchronous file reading so the read will block the Firefox UI. The only alternative would be importing NetUtil and FileUtils via chrome authority, something like this:
var {components, Cu} = require("chrome");
var {NetUtil} = Cu.import("resource://gre/modules/NetUtil.jsm", null);
var {FileUtils} = Cu.import("resource://gre/modules/FileUtils.jsm", null);
NetUtil.asyncFetch(new FileUtils.File(filePath), function(stream, result)
{
  if (components.isSuccessCode(result))
  {
    var data = NetUtil.readInputStreamToString(stream, stream.available());
    console.log("Success: " + data);
  }
  else
    console.log("Error: " + result);
});
I am looking to utilize the ICU library for transliteration, but I would like to provide a custom transliteration file for a set of specific custom transliterations, to be incorporated into the ICU core at compile time for use in binary form elsewhere. I am working with the source of ICU 4.2 for compatibility reasons.
As I understand it, from the ICU Data page of their website, one way of going about this is to create the file trnslocal.mk within ICUHOME/source/data/translit/ , and within this file have the single line TRANSLIT_SOURCE_LOCAL=custom.txt.
For the custom.txt file itself, I used the following format, based on the master file root.txt:
custom{
    RuleBasedTransliteratorIDs {
        Kanji-Romaji {
            file {
                resource:process(transliterator){"custom/Kanji_Romaji.txt"}
                direction{"FORWARD"}
            }
        }
    }
    TransliteratorNamePattern {
        // Format for the display name of a Transliterator.
        // This is the language-neutral form of this resource.
        "{0,choice,0#|1#{1}|2#{1}-{2}}" // Display name
    }
    // Transliterator display names
    // This is the English form of this resource.
    "%Translit%Hex"         { "%Translit%Hex" }
    "%Translit%UnicodeName" { "%Translit%UnicodeName" }
    "%Translit%UnicodeChar" { "%Translit%UnicodeChar" }
    TransliterateLATIN{
        "",
        ""
    }
}
I then store within the directory custom the file Kanji_Romaji.txt, as found here. Because it uses > instead of the → I have seen in other files, I converted each entry appropriately, so they now look like:
丁 → Tei ;
七 → Shichi ;
When I compile the ICU project, I am presented with no errors.
When I attempt to utilize this custom transliterator within a test file, however (a test file that works fine with the built-in transliterators), I am met with the error 65569:U_INVALID_ID.
I am using the following code to construct the transliterator and output the error:
UErrorCode status = U_ZERO_ERROR;
Transliterator *K_R = Transliterator::createInstance("Kanji-Romaji", UTRANS_FORWARD, status);
if (U_FAILURE(status))
{
    std::cout << "error: " << status << ":" << u_errorName(status) << std::endl;
    return 0;
}
Additionally, a loop through Transliterator::countAvailableIDs() and Transliterator::getAvailableID(i) does not list my custom transliterator. I remember reading, with regard to custom converters, that they must be registered within /source/data/mappings/convrtrs.txt. Is there a similar file for transliterators?
It seems that my custom transliterator is either not being built into the appropriate packages (though there are no compile errors), is improperly formatted, or somehow not being registered for use. Incidentally, I am aware of the RuleBasedTransliterator route at runtime, but I would prefer to be able to compile the custom transliterations for use in any produced binary.
Let me know if any additional clarification is necessary. I know there is at least one ICU programmer on here, who has been quite helpful in other posts I have written and seen elsewhere as well. I would appreciate any help I can find. Thank you in advance!
Transliterators are sourced from CLDR - you could add your transliterator to CLDR (the crosswire directory contains it in XML format in the cldr/ directory) and rebuild ICU data. ICU doesn't have a simple mechanism for adding transliterators as you are trying to do. What I would do is forget about trnslocal.mk or custom.txt as you don't need to add any files, and simply modify root.txt - you might file a bug if you have a suggested improvement.
For the needs of a project, I want to print over LPT1 at specific locations: this will print a document on a dot matrix printer, where I should print values in the places they should go. I really hate going back to this, and I don't have any idea where to start. The internet has no specific information about printing to the LPT port with C#, and especially about how to send values to specific locations while printing. Is there any good example or tutorial for this? It would be a life saver.
I could suggest one thing to make your life easier: install a generic text printer driver (this comes as standard) and bind it to the LPT1 port. Then you can simply open 'LPT1' and send escape code sequences to specify font type (bold/italic), emphasis, font pitch, etc. I don't know if the resources below would be required, but I would imagine it would be something like this:
using (System.IO.StreamWriter sr = new System.IO.StreamWriter(@"\\.\LPT1"))
{
    sr.Write((char)0x1b);   // ESC
    sr.Write('k');          // select typeface
    sr.Write((char)1);      // typeface 1 (Sans Serif on many ESC/P printers)
    sr.Write("Hello");      // print in Sans Serif
    sr.WriteLine();
    sr.Flush();
}
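Since the goal is to print values at specific locations, note that ESC/P-style printers also accept positioning codes. This is only a sketch assuming an Epson ESC/P-compatible printer (ESC $ nL nH sets the absolute horizontal print position, in 1/60-inch units by default); check your printer's manual for the codes it actually supports:
using (System.IO.StreamWriter sr = new System.IO.StreamWriter(@"\\.\LPT1"))
{
    int position = 300;               // 300 / 60 = 5 inches from the left margin
    sr.Write((char)0x1b);             // ESC
    sr.Write('$');                    // absolute horizontal position command
    sr.Write((char)(position % 256)); // nL
    sr.Write((char)(position / 256)); // nH
    sr.Write("42.50");                // the value to print at that position
    sr.WriteLine();
    sr.Flush();
}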
Resources:
Printing to a zebra printer using VB.NET (This can be easily translated to C# or compile it to a DLL and reference it in your C# project)
An article on MSDN on how to interface to LPT1
Here is an extensive list of info pertaining to the parallel port (look further down, near the section titled 'Programming Tools for Port I/O and Interrupts'), discussing the usage of a DLL called inpout32.
Here is another article on MSDN that shows how to do raw printing.
Edited 2017-07-12: Updated the parallel port link to use the Wayback Machine.
Doesn't the printer in question have a Windows print driver? If so, it doesn't matter whether it is printing over LPT1 or not; it just uses the standard printing stuff.
Similar question: Dot Matrix printing in C#?
If your printer has drivers for Windows, then you can use standard printing techniques. See Petzold's book Programming Microsoft Windows with C# for a good intro.
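If you do go the driver route, the standard approach looks roughly like the sketch below; the printer name, font, and coordinates are just placeholders for your own setup, but PrintDocument lets you draw each value at an exact position on the page:
using System.Drawing;
using System.Drawing.Printing;

// A minimal sketch of driver-based printing; "Generic / Text Only" and the
// coordinates are placeholders for your own printer name and layout.
PrintDocument document = new PrintDocument();
document.PrinterSettings.PrinterName = "Generic / Text Only";
document.PrintPage += (sender, e) =>
{
    using (Font font = new Font("Courier New", 10))
    {
        // Coordinates are in hundredths of an inch by default.
        e.Graphics.DrawString("Invoice No: 12345", font, Brushes.Black, 100, 100);
        e.Graphics.DrawString("Total: 42.50", font, Brushes.Black, 100, 400);
    }
};
document.Print();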
Hey I just got a dot matrix printer in 2019 and you can still buy the ribbons for £5.
using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Text;
using Microsoft.Win32.SafeHandles;

public class ParallelPrinter
{
    // P/Invoke CreateFile so we can open the parallel port device directly.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern SafeFileHandle CreateFile(string lpFileName, FileAccess dwDesiredAccess, uint dwShareMode, IntPtr lpSecurityAttributes, FileMode dwCreationDisposition, uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    public static void Print(string text)
    {
        using (SafeFileHandle fileHandle = CreateFile("LPT1:", FileAccess.Write, 0, IntPtr.Zero, FileMode.OpenOrCreate, 0, IntPtr.Zero))
        {
            if (fileHandle.IsInvalid)
                throw new ApplicationException("Printer is Invalid");

            using (FileStream stream = new FileStream(fileHandle, FileAccess.Write))
            using (StreamWriter writer = new StreamWriter(stream, Encoding.ASCII))
            {
                writer.Write(text);
            }
        }
    }
}
No drivers are needed; you just need a parallel port, and you can get a PCIe card if you don't have one.
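A minimal usage example (the text and trailing form feed are just illustrative):
// Send a line of text followed by a form feed to eject the page.
ParallelPrinter.Print("Hello from 2019!\r\n\f");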