The string SerializedFileString contains a serialized file, potentially hundreds of MB in size. The server tries to copy it into the client's local string ClientSideSerializedFileString. That is too good to be true, which is why it throws an exception. Is there a Mirror way to do this?
[TargetRpc]
private void TargetSendFile(NetworkConnection target, string SerializedFileString)
{
    if (!hasAuthority) { return; }
    ClientSideSerializedFileString = SerializedFileString;
}
ArgumentException The output byte buffer is too small to contain the encoded data, encoding 'Unicode (UTF-8)' fallback 'System.Text.EncoderExceptionFallback'.
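A common workaround, regardless of the networking library, is to split the payload into chunks that each fit under the transport's maximum message size and reassemble them on the receiving side. Below is a rough, transport-agnostic sketch of the chunking arithmetic (in Java purely for illustration, since Mirror itself is C#; MAX_CHUNK_BYTES is a hypothetical limit):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PayloadChunker {
    // Hypothetical limit; in practice this must stay below the
    // transport's maximum message size.
    static final int MAX_CHUNK_BYTES = 16 * 1024;

    // Split the serialized payload into chunks; the sender transmits them
    // one by one and the receiver concatenates them in order before
    // deserializing.
    static List<byte[]> split(byte[] payload) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < payload.length; offset += MAX_CHUNK_BYTES) {
            int end = Math.min(payload.length, offset + MAX_CHUNK_BYTES);
            chunks.add(Arrays.copyOfRange(payload, offset, end));
        }
        return chunks;
    }
}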
I'm using the hadoop-connectors project for writing BLOBs to Google Cloud Storage.
I'd like to make sure that a BLOB with a specific target name that is written in a concurrent context is either written in full or does not become visible at all if an exception occurs while writing.
In the code below, if an I/O exception occurs, the BLOB written will still appear on GCS because the stream is closed in finally:
val stream = fs.create(path, overwrite)
try {
  actions.map(_ + "\n").map(_.getBytes(UTF_8)).foreach(stream.write)
} finally {
  stream.close()
}
The other possibility would be to not close the stream and let it "leak" so that the BLOB does not get created. However, this is not really a valid option.
val stream = fs.create(path, overwrite)
actions.map(_ + "\n").map(_.getBytes(UTF_8)).foreach(stream.write)
stream.close()
Can anybody share a recipe for writing a BLOB to GCS atomically, either with hadoop-connectors or the Cloud Storage client?
I have used reflection within hadoop-connectors to retrieve an instance of com.google.api.services.storage.Storage from the GoogleHadoopFileSystem instance
GoogleCloudStorage googleCloudStorage = ghfs.getGcsFs().getGcs();
Field gcsField = googleCloudStorage.getClass().getDeclaredField("gcs");
gcsField.setAccessible(true);
Storage gcs = (Storage) gcsField.get(googleCloudStorage);
in order to be able to make a call based on an input stream corresponding to the data in memory.
private static StorageObject createBlob(URI blobPath, byte[] content, GoogleHadoopFileSystem ghfs, Storage gcs)
    throws IOException
{
    CreateFileOptions createFileOptions = new CreateFileOptions(false);
    CreateObjectOptions createObjectOptions = objectOptionsFromFileOptions(createFileOptions);
    PathCodec pathCodec = ghfs.getGcsFs().getOptions().getPathCodec();
    StorageResourceId storageResourceId = pathCodec.validatePathAndGetId(blobPath, false);
    StorageObject object =
        new StorageObject()
            .setContentEncoding(createObjectOptions.getContentEncoding())
            .setMetadata(encodeMetadata(createObjectOptions.getMetadata()))
            .setName(storageResourceId.getObjectName());
    InputStream inputStream = new ByteArrayInputStream(content, 0, content.length);
    Storage.Objects.Insert insert = gcs.objects().insert(
        storageResourceId.getBucketName(),
        object,
        new InputStreamContent(createObjectOptions.getContentType(), inputStream));
    // The operation succeeds only if there are no live versions of the blob.
    insert.setIfGenerationMatch(0L);
    insert.getMediaHttpUploader().setDirectUploadEnabled(true);
    insert.setName(storageResourceId.getObjectName());
    return insert.execute();
}
/**
 * Helper for converting from a Map<String, byte[]> metadata map that may be in a
 * StorageObject into a Map<String, String> suitable for placement inside a
 * GoogleCloudStorageItemInfo.
 */
@VisibleForTesting
static Map<String, String> encodeMetadata(Map<String, byte[]> metadata) {
    return Maps.transformValues(metadata, QuickstartParallelApiWriteExample::encodeMetadataValues);
}

// A function to encode metadata map values
private static String encodeMetadataValues(byte[] bytes) {
    return bytes == null ? Data.NULL_STRING : BaseEncoding.base64().encode(bytes);
}
Note that in the example above, even if multiple callers try to create a blob with the same name in parallel, ONE and only ONE of them will succeed in creating the blob. The other callers will receive 412 Precondition Failed.
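If another caller wins the race, insert.execute() throws; the caller can then distinguish the expected 412 from real failures. A minimal sketch, assuming the google-api-client exception type (createBlob and its arguments are the ones from the example above):

import com.google.api.client.googleapis.json.GoogleJsonResponseException;

try {
    StorageObject blob = createBlob(blobPath, content, ghfs, gcs);
    // This caller created the blob.
} catch (GoogleJsonResponseException e) {
    if (e.getStatusCode() == 412) {
        // Precondition failed: a concurrent caller created the blob first.
    } else {
        throw e;
    }
}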
GCS objects (blobs) are immutable [1], which means they can be created, deleted, or replaced, but not appended.
The Hadoop GCS connector provides the HCFS interface, which gives the illusion of appendable files. But under the hood it is just one blob creation; GCS doesn't know whether the content is complete from the application's perspective, just as you mentioned in the example. There is no way to cancel a file creation.
There are 2 options you can consider:
Create a temp blob/file, copy it to the final blob/file, then delete the temp blob/file; see [2]. Note that there is no atomic rename operation in GCS; rename is implemented as copy-then-delete.
If your data fits into memory, first read up the stream and buffer the bytes in memory, then create the blob/file; see [3].
The GCS connector should also work with the 2 options above, but I think the GCS client library gives you more control (a sketch of option 2 follows below).
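For illustration, here is a minimal sketch of option 2 using the google-cloud-storage client library (bucket name, object name, and content are placeholders); its doesNotExist() precondition plays the same role as ifGenerationMatch(0) in the answer above:

import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageException;
import com.google.cloud.storage.StorageOptions;

public class AtomicBlobWrite {
    public static void main(String[] args) {
        Storage storage = StorageOptions.getDefaultInstance().getService();
        // Buffer the full content in memory first, then create the object in one call.
        byte[] content = "entire file content".getBytes();
        BlobInfo blobInfo = BlobInfo.newBuilder(BlobId.of("my-bucket", "my-object")).build();
        try {
            // Succeeds only if no live version of the object exists, so the blob
            // becomes visible either in full or not at all.
            storage.create(blobInfo, content, Storage.BlobTargetOption.doesNotExist());
        } catch (StorageException e) {
            if (e.getCode() == 412) {
                // Another writer created the object first.
            } else {
                throw e;
            }
        }
    }
}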
I am new to Vert.x and I want to serve a PDF over a GET request. I know a Buffer will be involved, but there are no resources on the internet on how to do that.
Omitting the details of how you would get the file from your data store (Couchbase), it is fair to assume the data is read correctly into a byte[].
Once the data is read, you can feed it to an io.vertx.core.buffer.Buffer, which can be used to write the data to the HttpServerResponse as follows:
public void sendPDFFile(byte[] fileBytes, HttpServerResponse response) {
    Buffer buffer = Buffer.buffer(fileBytes);
    response.putHeader("Content-Type", "application/pdf")
            .putHeader("Content-Length", String.valueOf(buffer.length()))
            .setStatusCode(200)
            .end(buffer);
}
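For completeness, here is a sketch of wiring this handler into a vertx-web route (assuming Vert.x 4; the route path and loadPdfBytes() are placeholders for your setup):

import io.vertx.core.Vertx;
import io.vertx.ext.web.Router;

Vertx vertx = Vertx.vertx();
Router router = Router.router(vertx);
// loadPdfBytes() stands in for whatever reads the document from the data store.
router.get("/report.pdf").handler(ctx -> sendPDFFile(loadPdfBytes(), ctx.response()));
vertx.createHttpServer().requestHandler(router).listen(8080);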
I am hosting a Space on DigitalOcean; it is basically DigitalOcean's equivalent of Amazon S3. My problem with dio is: I am making a GET request with dio to a file of 10 MB size. The request takes around 9 seconds on my phone but 3 seconds in my browser. I also had this issue with my custom backend. GET requests made with dio (which uses Dart's http module) seem to be extremely slow. I need to solve this issue, as I need to transfer 50 MB of data to the user from time to time. Why is dio so slow on GET requests?
I suspect this might be the underlying cause; check here.
await Dio().get(
  "Remote_Url_I_can_not_share",
  onReceiveProgress: (int downloaded, int total) {
    listener.call(downloaded.toDouble() / total.toDouble() * metadataPerc);
  },
  cancelToken: _cancelToken,
).catchError((err) => throw err);
I believe the reason for this is that the buffer size is limited to 8 KB somewhere in the underlying implementation.
I spent a whole day trying to increase it, still with no success. Let me share my experience with that buffer size.
Imagine you're downloading a file which is 16 MB.
Assume the remote server is also faster than your download speed (i.e., just forget about server load, etc.).
If the buffer size is:
128 bytes, downloading the 16 MB file takes: 10.820 seconds
1024 bytes, downloading the 16 MB file takes: 6.276 seconds
8192 bytes, downloading the 16 MB file takes: 4.776 seconds
16384 bytes, downloading the 16 MB file takes: 3.759 seconds
32768 bytes, downloading the 16 MB file takes: 2.956 seconds
------- After this point, increasing the chunk size increases the download time again
65536 bytes, downloading the 16 MB file takes: 4.186 seconds
131072 bytes, downloading the 16 MB file takes: 5.250 seconds
524288 bytes, downloading the 16 MB file takes: 7.460 seconds
So somehow, if you can set that buffer size to 16 KB or 32 KB rather than 8 KB, I believe the download speed will increase.
Please feel free to test the results yourself (I did 3 tries and took their average for the timings):
package dltest;

import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class DLTest
{
    public static void main(String[] args) throws Exception
    {
        String filePath = "http://hcmaslov.d-real.sci-nnov.ru/public/mp3/Metallica/Metallica%20'...And%20Justice%20For%20All'.mp3";
        URL url = new URL(filePath);
        URLConnection uc = url.openConnection();
        InputStream is = uc.getInputStream();
        long start = System.currentTimeMillis();
        int partialRead;
        // Uncomment one buffer size per run:
        // byte[] chunk = new byte[128];
        // byte[] chunk = new byte[1024];
        // byte[] chunk = new byte[4096];
        // byte[] chunk = new byte[8192];
        byte[] chunk = new byte[16384];
        // byte[] chunk = new byte[32768];
        // byte[] chunk = new byte[524288];
        // Drain the stream; only the elapsed time matters, not the data itself.
        while ((partialRead = is.read(chunk)) != -1)
        {
            // Print if you like..
        }
        is.close();
        long end = System.currentTimeMillis();
        System.out.println("Chunk Size [" + chunk.length + "] Time To Complete : " + (end - start));
    }
}
My experience with DigitalOcean Spaces has been a very fluctuating one. DO Spaces is, in my opinion, not production ready. I was using their CDN feature for a website, and sometimes the response times would be about 20 ms, but sometimes they would exceed 6 seconds. This was in the AMS3 datacenter region.
Can you confirm this happens with other S3 providers/servers as well, such as gstatic or Amazon CloudFront CDN?
This fluctuating behaviour happened constantly, which is why we transferred all our assets to Amazon S3 + CloudFront. It provides much more consistent results.
It could be that the phone you are testing on takes a very unoptimized route to the DigitalOcean datacenters. That's why you should try different servers.
I've tried to upload a song (.mp4 file format) to Media Services. It uploaded successfully, but when I try to create an encoding job I get the error mentioned below. I get this error for some files, but not for others. I am unable to identify what the error is and how to resolve this problem.
Error Msg:
Encoding task
ErrorProcessingTask : An error has occurred. Stage: ApplyEncodeCommand. Code: System.IO.InvalidDataException.
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. --->
System.IO.InvalidDataException: Bad input: the source video has an avg_frame_rate of NaN fps and r_frame_rate of 90000 fps.
Code, using the "H264 Multiple Bitrate 720p" encoding preset:
public static IAsset CreateEncodingJob(IAsset asset, string preset, string fileName)
{
    IJob job = _context.Jobs.Create(preset + " encoding job");
    var mediaProcessors =
        _context.MediaProcessors.Where(p => p.Name.Contains("Media Encoder Standard")).ToList();
    var latestMediaProcessor =
        mediaProcessors.OrderBy(mp => new Version(mp.Version)).LastOrDefault();
    ITask task = job.Tasks.AddNew(preset + " encoding task",
        latestMediaProcessor,
        preset,
        Microsoft.WindowsAzure.MediaServices.Client.TaskOptions.ProtectedConfiguration);
    task.InputAssets.Add(asset);
    task.OutputAssets.AddNew(fileName + " " + preset, AssetCreationOptions.None);
    job.StateChanged += new EventHandler<JobStateChangedEventArgs>(StateChanged);
    job.Submit();
    LogJobDetails(job.Id);
    Task progressJobTask = job.GetExecutionProgressTask(CancellationToken.None);
    progressJobTask.Wait();
    if (job.State == JobState.Error)
    {
        throw new Exception("\nExiting method due to job error.");
    }
    return job.OutputMediaAssets[0];
}
Can anyone help me with this?
Found the solution: click here.
Reposting the comment:
Your encode tasks are failing because the nominal frame rate reported by the input video is either too high or too low. You will have to override the output frame rate setting in the encoding preset. Suppose you know that the input videos were recorded at 30 frames/second; then:
1. Take the JSON for "H264 Multiple Bitrate 720p" from https://msdn.microsoft.com/en-us/library/azure/mt269953.aspx
2. Edit/replace each "FrameRate": "0/1" entry with "FrameRate": "30/1". Note that there are multiple entries to replace (see the fragment below).
3. Save the resulting JSON.
4. When submitting an encoding task, in CreateEncodingJob, pass the entire JSON instead of the preset name string (e.g. by using System.IO.File.ReadAllText("song.Json")).
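For example, each H264 layer in the preset contains one of the entries to change; an abbreviated, illustrative fragment after the edit might look like this (most fields omitted):

{
  "Codecs": [
    {
      "Type": "H264Video",
      "H264Layers": [
        {
          "FrameRate": "30/1"
        }
      ]
    }
  ]
}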
Regards,
Dilip.
I'm trying to upload an XML (UTF-8) file and post it on a JBoss MQ. When reading the file from the listener, UTF-8 characters are not correctly formatted, but ONLY in the JBoss (jboss-5.1.0.GA-3) instance running on Linux.
For instance: BORÅS is converted to BOR¿S on the Linux JBoss instance.
When I copy and configure the same JBoss instance to run on Windows (SP3) it works perfectly.
I have also changed the default setting in Linux by including JAVA_OPTS=-Dfile.encoding=UTF-8 in .bashrc and run.sh.
Inside the listener, JbossTextMessage.getText() returns the incorrectly encoded characters.
Any suggestions or workarounds?
Finally I was able to find a solution, BUT the solution is still a black box. If anyone has the answer as to WHY it fails/succeeds, please update the thread.
Solution at a glance:
1. Capture the file contents as a byte array and write them to an XML file in the JBoss tmp folder using a FileOutputStream.
2. When posting to the JBoss message queue, read the explicitly written XML file (from step 1) into a byte array using a FileInputStream and pass it as the message body.
Code example:
View: JSP page with a FormFile
Controller class: UploadAction.java

public ActionForward execute(ActionMapping mapping, ActionForm form, HttpServletRequest request, HttpServletResponse response){
    ...........
    writeInitFile(theForm.getFile().getFileData()); // Obtain the uploaded file
    // messageHelper is a customized factory method to create Message objects,
    // passing the newly written file's byte array.
    Message msg = messageHelper.createMessage( readInitFile() );
    messageHelper.sendMsg(msg); // Post the message to the queue
    ...........
}
private void writeInitFile(byte[] fileData) throws Exception{
    // Write the uploaded file into a temporary file in the jboss/tmp folder
    File someFile = new File("/jboss-5.1.0.GA-3/test/server/default/tmp/UploadTmp.xml");
    FileOutputStream fos = new FileOutputStream(someFile);
    fos.write(fileData);
    fos.flush();
    fos.close();
}
private byte[] readInitFile() throws Exception{
    // Read the newly created file in the jboss/tmp folder back as raw bytes.
    // Accumulating raw bytes (rather than chars) avoids any charset conversion
    // that could corrupt multi-byte UTF-8 sequences.
    File someFile = new File("/jboss-5.1.0.GA-3/test/server/default/tmp/UploadTmp.xml");
    FileInputStream fstream = new FileInputStream(someFile);
    ByteArrayOutputStream byteArray = new ByteArrayOutputStream();
    int ch;
    while( (ch = fstream.read()) != -1){
        byteArray.write(ch);
    }
    fstream.close();
    return byteArray.toByteArray(); // return the byte[]
}
Footnote: I think it is something to do with the Linux/Windows default file encoding, e.g. the Windows default is ANSI.
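If the root cause really is the platform's default charset, a simpler alternative may be to avoid relying on defaults altogether and name the charset explicitly at every byte/text boundary. A minimal sketch (the path and sample string are illustrative):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ExplicitCharsetExample {
    public static void main(String[] args) throws IOException {
        Path tmp = Paths.get("/tmp/UploadTmp.xml"); // illustrative path

        // Writing raw bytes performs no charset conversion at all.
        byte[] uploaded = "BORÅS".getBytes(StandardCharsets.UTF_8);
        Files.write(tmp, uploaded);

        // When the bytes must be treated as text, name the charset explicitly
        // instead of depending on -Dfile.encoding.
        String text = new String(Files.readAllBytes(tmp), StandardCharsets.UTF_8);
        System.out.println(text); // prints BORÅS regardless of the platform default
    }
}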