"Forbidden" error when uploading file through Google Cloud Storage API - google-cloud-storage

I am using the "google-api-services-storage-v1beta2-rev5-java-1.15.0-rc.zip" Google Cloud Storage library together with the "StorageSample.java" sample program from here
I have followed the sample program's setup instructions and have set up the "client_secrets.json" and "sample_settings.json" files. The sample program compiles OK but runs only partially OK.
I have modified the "uploadObject" method of the "StorageSample.java" program so that it uploads a test file created by me (rather than upload a randomly generated file). The program runs OK in the following methods :
tryCreateBucket();
getBucket();
listObjects();
getObjectMetadata();
However, when running the "uploadObject(true)" method, I get the following error:
================== Uploading object. ==================
Forbidden
My modified "uploadObject" method is listed below :
private static void uploadObject(boolean useCustomMetadata) throws IOException {
    View.header1("Uploading object.");
    File file = new File("My_test_upload_file.txt");
    if (!file.exists() || !file.isFile()) {
        System.out.println("File does not exist");
        System.exit(1);
    }
    InputStream inputStream = new FileInputStream(file);
    long byteCount = file.length();
    InputStreamContent mediaContent = new InputStreamContent("application/octet-stream", inputStream);
    mediaContent.setLength(byteCount);
    StorageObject objectMetadata = null;
    if (useCustomMetadata) {
        List<ObjectAccessControl> acl = Lists.newArrayList(); // empty acl (seems default acl).
        objectMetadata = new StorageObject()
            .setName("myobject")
            .setMetadata(ImmutableMap.of("key1", "value1", "key2", "value2"))
            .setAcl(acl)
            .setContentDisposition("attachment");
    }
    Storage.Objects.Insert insertObject = storage.objects().insert("mybucket", objectMetadata, mediaContent);
    if (!useCustomMetadata) {
        insertObject.setName("myobject");
    }
    if (mediaContent.getLength() > 0 && mediaContent.getLength() <= 2 * 1000 * 1000 /* 2MB */) {
        insertObject.getMediaHttpUploader().setDirectUploadEnabled(true);
    }
    insertObject.execute();
}
On the first run of the program, a bucket is created, and I get the "Forbidden" error when uploading my test file. On subsequent runs, the "Forbidden" error persists.
I think that since the bucket is created by the program, the program should have enough access rights to upload a file to that bucket.
Is there any setup or operation that I have missed? Thanks for any suggestions.

Oh, what a careless mistake. I had forgotten to change the "mybucket" name to my created bucket's name.
The program now runs OK.
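For anyone hitting the same 403: a minimal sketch of the corrected call, assuming the real bucket name is kept in a constant (the constant name and its value here are hypothetical):

// Hypothetical constant holding the bucket name that tryCreateBucket() creates.
private static final String BUCKET_NAME = "my-real-bucket-name";

// Inserting into a bucket your credentials do not own (e.g. the sample's
// hard-coded "mybucket") is what produces the 403 Forbidden.
Storage.Objects.Insert insertObject =
        storage.objects().insert(BUCKET_NAME, objectMetadata, mediaContent);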

Related

Google Cloud Storage atomic creation of a Blob

I'm using the hadoop-connectors project for writing BLOBs to Google Cloud Storage.
I'd like to make sure that a BLOB with a specific target name that is being written in a concurrent context is either written in full or not visible at all in case an exception occurs while writing.
In the code below, if an I/O exception occurs, the BLOB written will still appear on GCS because the stream is closed in finally:
val stream = fs.create(path, overwrite)
try {
  actions.map(_ + "\n").map(_.getBytes(UTF_8)).foreach(stream.write)
} finally {
  stream.close()
}
The other possibility would be to not close the stream and let it "leak" so that the BLOB does not get created. However, this is not really a valid option.
val stream = fs.create(path, overwrite)
actions.map(_ + "\n").map(_.getBytes(UTF_8)).foreach(stream.write)
stream.close()
Can anybody share a recipe for writing a BLOB to GCS atomically, either with hadoop-connectors or the Cloud Storage client?
I have used reflection within hadoop-connectors to retrieve an instance of com.google.api.services.storage.Storage from the GoogleHadoopFileSystem instance:
GoogleCloudStorage googleCloudStorage = ghfs.getGcsFs().getGcs();
Field gcsField = googleCloudStorage.getClass().getDeclaredField("gcs");
gcsField.setAccessible(true);
Storage gcs = (Storage) gcsField.get(googleCloudStorage);
This gives me the ability to make a call based on an input stream corresponding to the data in memory:
private static StorageObject createBlob(URI blobPath, byte[] content, GoogleHadoopFileSystem ghfs, Storage gcs)
        throws IOException {
    CreateFileOptions createFileOptions = new CreateFileOptions(false);
    CreateObjectOptions createObjectOptions = objectOptionsFromFileOptions(createFileOptions);
    PathCodec pathCodec = ghfs.getGcsFs().getOptions().getPathCodec();
    StorageResourceId storageResourceId = pathCodec.validatePathAndGetId(blobPath, false);
    StorageObject object =
        new StorageObject()
            .setContentEncoding(createObjectOptions.getContentEncoding())
            .setMetadata(encodeMetadata(createObjectOptions.getMetadata()))
            .setName(storageResourceId.getObjectName());
    InputStream inputStream = new ByteArrayInputStream(content, 0, content.length);
    Storage.Objects.Insert insert = gcs.objects().insert(
        storageResourceId.getBucketName(),
        object,
        new InputStreamContent(createObjectOptions.getContentType(), inputStream));
    // The operation succeeds only if there are no live versions of the blob.
    insert.setIfGenerationMatch(0L);
    insert.getMediaHttpUploader().setDirectUploadEnabled(true);
    insert.setName(storageResourceId.getObjectName());
    return insert.execute();
}
/**
 * Helper for converting from a Map<String, byte[]> metadata map that may be in a
 * StorageObject into a Map<String, String> suitable for placement inside a
 * GoogleCloudStorageItemInfo.
 */
@VisibleForTesting
static Map<String, String> encodeMetadata(Map<String, byte[]> metadata) {
    return Maps.transformValues(metadata, QuickstartParallelApiWriteExample::encodeMetadataValues);
}

// A function to encode metadata map values
private static String encodeMetadataValues(byte[] bytes) {
    return bytes == null ? Data.NULL_STRING : BaseEncoding.base64().encode(bytes);
}
Note that in the example above, even if multiple callers try to create a blob with the same name in parallel, ONE and only ONE will succeed in creating it; the other callers will receive 412 Precondition Failed.
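A sketch of how a caller might distinguish winning from losing that race, assuming the standard google-api-client exception type (variable names are illustrative):

import com.google.api.client.googleapis.json.GoogleJsonResponseException;

try {
    StorageObject created = createBlob(blobPath, content, ghfs, gcs);
    // This caller won the race: the blob now exists with a live generation.
} catch (GoogleJsonResponseException e) {
    if (e.getStatusCode() == 412) {
        // Precondition Failed: another writer created the blob first.
    } else {
        throw e;
    }
}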
GCS objects (blobs) are immutable [1], which means they can be created, deleted, or replaced, but not appended.
The Hadoop GCS connector provides the HCFS interface, which gives the illusion of appendable files. But under the hood it is just one blob creation; GCS doesn't know whether the content is complete from the application's perspective, just as you mentioned in the example. There is no way to cancel a file creation.
There are two options you can consider:
1. Create a temp blob/file, copy it to the final blob/file, then delete the temp blob/file, see [2]. Note that there is no atomic rename operation in GCS; rename is implemented as copy-then-delete.
2. If your data fits into memory, first read up the stream and buffer the bytes in memory, then create the blob/file, see [3].
The GCS connector should also work with the two options above, but I think the GCS client library gives you more control, for example:
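A minimal sketch of option 2 using the google-cloud-storage client library (bucket and object names are placeholders; the data is assumed to fit in memory):

import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageException;
import com.google.cloud.storage.StorageOptions;

static Blob createBlobAtomically(String bucket, String name, byte[] content) {
    Storage storage = StorageOptions.getDefaultInstance().getService();
    BlobInfo blobInfo = BlobInfo.newBuilder(BlobId.of(bucket, name)).build();
    try {
        // doesNotExist() translates to ifGenerationMatch=0, so the create
        // succeeds only when no live version of the object exists.
        return storage.create(blobInfo, content, Storage.BlobTargetOption.doesNotExist());
    } catch (StorageException e) {
        if (e.getCode() == 412) {
            return null; // another concurrent writer won the race
        }
        throw e;
    }
}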

How to use the sendFile method to send a file located on the internet?

I want to use the Vert.x routingContext.response().sendFile method to read a file from the internet and send it to some handler.
I have tried routingContext.response().sendFile with files located on my local system, which works fine, but when I use a file located on the internet instead, I get java.io.FileNotFoundException:
String filename = "http://www.awitness.org/prophecy.zip";
routingContext.response().sendFile(filename, asr->{
if(asr.succeeded()) {
System.out.println("success....");
} else {
System.out.println("Something went wrong " + asr.cause());
}
});
Getting this output:
Something went wrong java.io.FileNotFoundException
That's because sendFile() takes a local file path as its argument.
The best solution would be to download this file ahead of time and serve it from your application.
A worse solution is to download the file on demand, save it using vertx.fileSystem().createTempFile(), and still serve it locally.
Now, for the sake of argument, let's say you would like to go down the second path. How would you do that? You can try something like this:
final Vertx vertx = Vertx.vertx();
final Router router = Router.router(vertx);
WebClient c = WebClient.create(vertx);
String temp = vertx.fileSystem().createTempFileBlocking("", "");
c.get("www.awitness.org", "/prophecy.zip").send(r -> {
    if (r.succeeded()) {
        Buffer buffer = r.result().body();
        vertx.fileSystem().writeFileBlocking(temp, buffer);
    }
});
router.route("/").produces("application/zip").handler(ctx -> {
    ctx.response().sendFile(temp);
});
I'm using blocking APIs here only for the sake of simplicity; the correct ones to use are the async variants.
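An alternative sketch that avoids the temp file entirely by fetching the remote file and ending the HTTP response with the downloaded buffer (host and path are the ones from the question; buffering the whole body in memory is assumed acceptable):

import io.vertx.ext.web.Router;
import io.vertx.ext.web.client.WebClient;

WebClient client = WebClient.create(vertx);
router.route("/").produces("application/zip").handler(ctx -> {
    client.get(80, "www.awitness.org", "/prophecy.zip").send(ar -> {
        if (ar.succeeded()) {
            // Relay the downloaded bytes straight to the caller.
            ctx.response()
                    .putHeader("Content-Type", "application/zip")
                    .end(ar.result().body());
        } else {
            ctx.fail(ar.cause());
        }
    });
});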

DxlImporter inside a loop throws error "DXL importer operation failed"

I have a Java agent which loops through a view and gets the attachment from each document. The attachment is nothing but a .dxl file containing the document's XML data. I extract the file to a temp directory and try to import the extracted .dxl as soon as it is extracted.
The problem is that it only imports the first document's attachment in the loop and then throws this error in the Java debug console:
NotesException: DXL importer operation failed
    at lotus.domino.local.DxlImporter.importDxl(Unknown Source)
    at JavaAgent.NotesMain(Unknown Source)
    at lotus.domino.AgentBase.runNotes(Unknown Source)
    at lotus.domino.NotesThread.run(Unknown Source)
My Java agent code is:
public class JavaAgent extends AgentBase {
    static DxlImporter importer = null;

    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext agentContext = session.getAgentContext();
            // Get current database
            Database db = agentContext.getCurrentDatabase();
            View v = db.getView("DXLProcessing_mails");
            DocumentCollection dxl_tranfered_mail = v.getAllDocumentsByKey("dxl_tranfered_mail");
            Document dxlDoc = dxl_tranfered_mail.getFirstDocument();
            while (dxlDoc != null) {
                RichTextItem rt = (RichTextItem) dxlDoc.getFirstItem("body");
                Vector allObjects = rt.getEmbeddedObjects();
                System.out.println("File name is " + allObjects.get(0));
                EmbeddedObject eo = dxlDoc.getAttachment(allObjects.get(0).toString());
                if (eo.getFileSize() > 0) {
                    eo.extractFile(System.getProperty("java.io.tmpdir") + eo.getName());
                    System.out.println("Extracted File to " + System.getProperty("java.io.tmpdir") + eo.getName());
                    String filePath = System.getProperty("java.io.tmpdir") + eo.getName();
                    Stream stream = session.createStream();
                    if (stream.open(filePath) & (stream.getBytes() > 0)) {
                        System.out.println("In If" + System.getProperty("java.io.tmpdir"));
                        importer = session.createDxlImporter();
                        importer.setDocumentImportOption(DxlImporter.DXLIMPORTOPTION_CREATE);
                        System.out.println("Break Point");
                        importer.importDxl(stream, db);
                        System.out.println("Imported Successfully");
                    } else {
                        System.out.println("In else" + stream.getBytes());
                    }
                }
                dxlDoc = dxl_tranfered_mail.getNextDocument();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
The code executes until it prints "Break Point" and then throws the error, although the attachment does get imported the first time.
On the other hand, if I hard-code filePath to point at a specific .dxl file on the file system, it imports the DXL as a document into the database with no errors.
I am wondering whether the stream that is passed in does not get fully consumed before the next loop iteration executes.
Any kind of suggestion will be helpful.
I can't see any part where your while loop would move on from the first document.
Usually you would have something like:
Document nextDoc = dxl_tranfered_mail.getNextDocument(dxlDoc);
dxlDoc.recycle();
dxlDoc = nextDoc;
Near the end of the loop to advance it to the next document. As your code currently stands it looks like it would never advance, and always be on the first document.
If you do not know about the need to 'recycle' Domino objects, I suggest you search for some blog posts and articles that explain why.
It is a little complicated, but basically the Java objects are just a 'wrapper' for the objects in the C API.
Whenever you create a Domino object (such as a Document, View, DocumentCollection, etc.), a memory handle is allocated in the underlying 'C' layer. This needs to be released (or recycled); it will eventually be released when the session is recycled, but when you are processing in a loop it is much more important to recycle explicitly, as you can easily exhaust the available memory handles and cause a crash.
Also, it's possible you may need to close (and recycle) each Stream after you have finished importing each file.
Lastly, double-check that the extracted file that is causing an exception is definitely a valid DXL file; it could simply be that some of the attachments are not valid DXL and will always throw an exception.
You could put a try/catch within the loop to handle that scenario (and report the problem files), which will allow the agent to continue without halting, as in the sketch below.
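Putting those pieces together, a hedged sketch of the inner loop with advancement, per-document error handling, and recycling, based on the question's variable names (filePath comes from the extraction step in the original code):

Document dxlDoc = dxl_tranfered_mail.getFirstDocument();
while (dxlDoc != null) {
    Stream stream = session.createStream();
    try {
        // ... extract the attachment to filePath exactly as in the question ...
        if (stream.open(filePath) && stream.getBytes() > 0) {
            importer = session.createDxlImporter();
            importer.setDocumentImportOption(DxlImporter.DXLIMPORTOPTION_CREATE);
            importer.importDxl(stream, db);
        }
    } catch (NotesException e) {
        // Report the problem file and keep going instead of halting the agent.
        System.out.println("Import failed: " + e.text);
    } finally {
        try {
            stream.close();   // close (and recycle) the stream each iteration
            stream.recycle();
        } catch (NotesException ignore) {
        }
    }
    Document nextDoc = dxl_tranfered_mail.getNextDocument(dxlDoc);
    dxlDoc.recycle();         // free the backing C handle for this document
    dxlDoc = nextDoc;
}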

Error uploading a file using a Jersey REST service

I am using Jersey to build a REST service that uploads a file, but I am facing a problem writing the file to the required location: Java throws a "system cannot find the specified path" error. Here is my web service:
@POST
@Path("/fileupload")
@Consumes(MediaType.MULTIPART_FORM_DATA)
public Response uploadFile(@FormDataParam("file") InputStream fileUploadStream,
        @FormDataParam("file") FormDataContentDisposition fileDetails) throws IOException {
    StringBuilder uploadFileLocation = new StringBuilder();
    uploadFileLocation.append("c:/logparser/webfrontend/uploads");
    uploadFileLocation.append("/" + dateFormat.format(Calendar.getInstance().getTime()));
    uploadFileLocation.append("/" + fileDetails.getFileName());
    writeToFile(fileUploadStream, uploadFileLocation.toString());
    return Response.status(200).entity("File saved to " + uploadFileLocation).build();
}
private void writeToFile(InputStream uploadInputStream, String uploadFileLocation) {
    log.debug("UploadService , writeToFile method , start ()");
    try {
        int read = 0;
        byte[] bytes = new byte[uploadInputStream.available()];
        log.info("UploadService, writeToFile method , copying uploaded files.");
        OutputStream out = new FileOutputStream(new File(uploadFileLocation));
        while ((read = uploadInputStream.read(bytes)) != -1) {
            out.write(bytes, 0, read);
        }
        out.flush();
        out.close();
    } catch (Exception e) {
        log.error("UploadService, writeToFile method, error in writing to file " + e.getMessage());
    }
}
From looking at just the code (it's usually helpful to include the exception and stack trace), you're trying to write to a directory, named after a timestamp, which doesn't exist yet. Try adding a call to File.mkdir/mkdirs. See this question/answer: FileNotFoundException (The system cannot find the path specified).
Side note: unless you have a reason not to, I'd consider using something like Apache commons-io (FileUtils.copyInputStreamToFile) to do the writing, as in the sketch below.
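Combining both suggestions, a minimal sketch of the rewritten helper, assuming commons-io is on the classpath:

import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.io.FileUtils;

private void writeToFile(InputStream uploadInputStream, String uploadFileLocation) throws IOException {
    File target = new File(uploadFileLocation);
    // Create the date-stamped directory tree before writing (copyInputStreamToFile
    // would also create it, but the explicit call makes the fix visible).
    target.getParentFile().mkdirs();
    // commons-io buffers the copy and closes the input stream when done.
    FileUtils.copyInputStreamToFile(uploadInputStream, target);
}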

JbossTextMessage Unicode conversion failed on Linux

I'm trying to upload an XML (UTF-8) file and post it on a JBoss MQ. When reading the file from the listener, UTF-8 characters are not correctly formatted, ONLY in the JBoss (jboss-5.1.0.GA-3) instance running on Linux.
For instance: BORÅS is converted to BOR¿S on the Linux JBoss instance.
When I copy and configure the same JBoss instance to run on Windows (SP3), it works perfectly.
I have also changed the default setting on Linux by including JAVA_OPTS=-Dfile.encoding=UTF-8 in the .bashrc and run.sh files.
Inside the listener, JbossTextMessage.getText() returns incorrectly encoded characters.
Any suggestions or workarounds?
Finally I was able to find a solution, BUT the solution is still a black box. If anyone has the answer to WHY it failed/succeeded, please update the thread.
Solution at a glance:
1. Captured the file contents as a byte array and wrote them to an XML file in the JBoss tmp folder using a FileOutputStream.
2. When posting to the JBoss message queue, read the explicitly written XML file (from step 1) back into a byte array using a FileInputStream and passed it as the message body.
Code example:
View: JSP page with a FormFile
Controller class: UploadAction.java

public ActionForward execute(ActionMapping mapping, ActionForm form, HttpServletRequest request, HttpServletResponse response) {
    ...........
    writeInitFile(theForm.getFile().getFileData()); // Obtain the uploaded file
    // messageHelper is a customized factory method to create Message objects;
    // pass it the newly written file's byte array.
    Message msg = messageHelper.createMessage(readInitFile());
    messageHelper.sendMsg(msg); // post it on the queue
    ...........
}
private void writeInitFile(byte[] fileData) throws Exception {
    // Write the uploaded file into a temporary file in the jboss/tmp folder
    File someFile = new File("/jboss-5.1.0.GA-3/test/server/default/tmp/UploadTmp.xml");
    FileOutputStream fos = new FileOutputStream(someFile);
    fos.write(fileData);
    fos.flush();
    fos.close();
}

private byte[] readInitFile() throws Exception {
    StringBuilder byteArray = new StringBuilder();
    // Read the newly created file in the jboss/tmp folder
    File someFile = new File("/jboss-5.1.0.GA-3/test/server/default/tmp/UploadTmp.xml");
    FileInputStream fstream = new FileInputStream(someFile);
    int ch;
    while ((ch = fstream.read()) != -1) {
        byteArray.append((char) ch);
    }
    fstream.close();
    return byteArray.toString().getBytes(); // return the byte[]
}
Footnote: I think it is something to do with the Linux/Windows default file encoding (e.g., the Windows default is ANSI).
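A possible reason the behavior differs between platforms: readInitFile() widens each byte to a char and then calls getBytes(), which re-encodes those chars with the platform default charset; that round trip is lossless only for Latin-1-like defaults (such as Windows ANSI) and corrupts multi-byte UTF-8 sequences otherwise. A sketch that sidesteps the charset dependency entirely, assuming Java 7+ is available:

import java.nio.file.Files;
import java.nio.file.Paths;

private byte[] readInitFile() throws Exception {
    // Read the raw bytes back with no byte-to-char conversion, so the
    // platform default charset never touches the UTF-8 content.
    return Files.readAllBytes(
            Paths.get("/jboss-5.1.0.GA-3/test/server/default/tmp/UploadTmp.xml"));
}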