Android Bitmap image upload to MySQL database as a BLOB data type - mysqli

The Bitmap image is not being uploaded to the server. In the database, the image table has a BLOB column, and I want to save the bitmap image into it.
class savebitmap extends AsyncTask<String, Void, String> {
    @Override
    protected String doInBackground(String... params) {
        try {
            // Save the image to the SD card.
            //File file = new File(Environment.getExternalStorageDirectory(),
            //        System.currentTimeMillis() + ".png");
            //FileOutputStream stream = new FileOutputStream(file);
            //bitmap.compress(CompressFormat.PNG, 100, stream);

            // convert to byte array
            ByteArrayOutputStream bytedata = new ByteArrayOutputStream();
            bitmap.compress(CompressFormat.JPEG, 100, bytedata);
            byte[] data = bytedata.toByteArray();
            String imagedata = Base64.encodeToString(data, Base64.DEFAULT);
            String name = "prescription";

            // save image to MySQL
            httpclient = new DefaultHttpClient();
            httppost = new HttpPost("http://10.0.2.2/android/image.php");
            nameValuePairs = new ArrayList<NameValuePair>();
            nameValuePairs.add(new BasicNameValuePair("name", name));
            nameValuePairs.add(new BasicNameValuePair("image", imagedata));
            httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
            response = httpclient.execute(httppost);
            HttpEntity entity = response.getEntity();
            InputStream is = entity.getContent();
            Log.e("Connection", "connection success ");
            Log.e("bitmap", imagedata);
        } catch (Exception e) {
            Log.e("upload failed", e.toString());
        }
        return null;
    }
}
My PHP file, which receives the data from the HTTP request and inserts it into the database:
<?php
mysql_connect("localhost","root","") or die (mysql_error());
mysql_select_db("image") or die (mysql_errno());
$base= $_REQUEST['image'];
$name= $_REQUEST['name'];
$buffer = base64_decode($base);
$buffer = mysql_real_escape_string($buffer);
$flag['code']=0;
if($q=mysql_query("INSERT INTO image ('name','image')
VALUES ('$name',$buffer')"))
{
$flag['code']=1;
}
print(json_encode($flag));
mysql_close();
mysql_close();
?>
Logcat
09-12 19:07:43.486: D/dalvikvm(816): GC_FOR_ALLOC freed <1K, 5% free 6131K/6388K, paused 44ms, total 44ms
09-12 19:07:43.666: D/gralloc_goldfish(816): Emulator without GPU emulation detected.
09-12 19:07:46.946: D/dalvikvm(816): GC_FOR_ALLOC freed 2507K, 43% free 3651K/6388K, paused 68ms, total 69ms
09-12 19:07:46.946: I/dalvikvm-heap(816): Grow heap (frag case) to 4.974MB for 1334416-byte allocation
09-12 19:07:46.986: D/dalvikvm(816): GC_CONCURRENT freed 1K, 23% free 4952K/6388K, paused 7ms+3ms, total 37ms
09-12 19:07:46.986: D/dalvikvm(816): WAIT_FOR_CONCURRENT_GC blocked 17ms
09-12 19:07:49.766: D/dalvikvm(816): GC_CONCURRENT freed 577K, 20% free 5124K/6388K, paused 4ms+7ms, total 44ms
09-12 19:07:50.946: E/Connection(816): connection success

You have an error in your SQL syntax:
INSERT INTO image ('name','image') VALUES ('$name',$buffer')
Note that you're missing an opening quote on your second parameter. It should be:
INSERT INTO image ('name','image') VALUES ('$name','$buffer')
^--- right there
(Strictly speaking, MySQL also doesn't accept single quotes around column names; use backticks or no quotes at all, e.g. INSERT INTO image (name, image) ....)
You're also trying to close the connection twice for some reason:
mysql_close();
mysql_close();
There's a pretty good chance that would result in an error. Additionally, you're never examining the result of your SQL query or any error condition resulting from it. You check for errors when connecting to the database, but not when interacting with it. So there's a very good chance that the database is telling you exactly what's wrong and you're simply ignoring it. When something isn't behaving as expected, it's a good idea to take a look at the errors first.
Also, and this is important, your code is wide open to SQL injection attacks. You'll want to start by reading this. It will help you understand what a SQL injection attack is and how to protect your code from one. In short, you are executing user input as if it were code. This allows any user to execute any code on your server without your permission, which is clearly a very bad thing.
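The linked reading covers the PHP-side fix (mysqli or PDO prepared statements with bound parameters). Purely to illustrate the parameterized-query idea, here is a minimal sketch in Java/JDBC, since the client side of this question is already Java; the connection details are hypothetical placeholders, while the database, table, and column names are taken from the question:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Base64;

public class ImageDao {
    // Decode the Base64 payload and insert it with a parameterized query,
    // so the image bytes are never spliced into the SQL string.
    public static void insertImage(String name, String base64Image) throws Exception {
        // MIME decoder tolerates the line breaks that Android's Base64.DEFAULT inserts.
        byte[] imageBytes = Base64.getMimeDecoder().decode(base64Image);
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/image", "root", "");   // hypothetical credentials
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO image (name, image) VALUES (?, ?)")) {
            ps.setString(1, name);        // bound as data, never concatenated into the SQL
            ps.setBytes(2, imageBytes);   // raw bytes go straight into the BLOB column
            ps.executeUpdate();           // throws SQLException on failure instead of failing silently
        }
    }
}
The equivalent in the PHP script would be a mysqli or PDO prepared statement with bound parameters, which also removes the need for the base64_decode plus mysql_real_escape_string gymnastics.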

Related

Transferring large data (>100 MB) over Mirror in Unity

The string SerializedFileString contains a serialized file, potentially hundreds of MB in size. The server tries to copy it into the client's local string ClientSideSerializedFileString. That turns out to be too good to be true, which is why it throws an exception. Is there a Mirror way to do this?
[TargetRpc]
private void TargetSendFile(NetworkConnection target, string SerializedFileString)
{
    if (!hasAuthority) { return; }
    ClientSideSerializedFileString = SerializedFileString;
}
ArgumentException The output byte buffer is too small to contain the encoded data, encoding 'Unicode (UTF-8)' fallback 'System.Text.EncoderExceptionFallback'.

Flutter Dart Dio Get Request is so slow

I am hosting a Space on DigitalOcean - it is basically DigitalOcean's equivalent of Amazon S3. My problem with dio is that when I make a GET request for a file of 10 MB, the request takes around 9 seconds on my phone but 3 seconds in my browser. I also had this issue with my custom backend. GET requests made with dio (which uses the http module of Dart) seem to be extremely slow. I need to solve this issue, as I need to transfer 50 MB of data to the user from time to time. Why is dio so slow on GET requests?
I suspect this might be the underlying cause; check here:
await Dio().get(
  "Remote_Url_I_can_not_share",
  onReceiveProgress: (int downloaded, int total) {
    listener
        .call((downloaded.toDouble() / total.toDouble() * metadataPerc));
  },
  cancelToken: _cancelToken,
).catchError((err) => throw err);
I believe the reason for this is that the buffer size is limited to 8 KB somewhere in the underlying implementation.
I spent the whole day trying to increase it, still with no success. Let me share my experience with that buffer size.
Imagine you're downloading a file that is 16 MB.
Assume the remote server is faster than your download speed (i.e., just forget about server load, etc.).
If the buffer size is:
128 bytes, downloading the 16 MB file takes: 10.820 seconds
1024 bytes, downloading the 16 MB file takes: 6.276 seconds
8192 bytes, downloading the 16 MB file takes: 4.776 seconds
16384 bytes, downloading the 16 MB file takes: 3.759 seconds
32768 bytes, downloading the 16 MB file takes: 2.956 seconds
------- Beyond this point, increasing the chunk size increases the download time again:
65536 bytes, downloading the 16 MB file takes: 4.186 seconds
131072 bytes, downloading the 16 MB file takes: 5.250 seconds
524288 bytes, downloading the 16 MB file takes: 7.460 seconds
So if you can somehow set that buffer size to 16 KB or 32 KB rather than 8 KB, I believe the download speed will increase.
Please feel free to run your own tests (I did 3 runs and averaged them for the timings above). Here is the harness I used:
package dltest;

import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class DLTest
{
    public static void main(String[] args) throws Exception
    {
        String filePath = "http://hcmaslov.d-real.sci-nnov.ru/public/mp3/Metallica/Metallica%20'...And%20Justice%20For%20All'.mp3";
        URL url = new URL(filePath);
        URLConnection uc = url.openConnection();
        InputStream is = uc.getInputStream();
        long start = System.currentTimeMillis();
        int downloaded = 0;
        int total = uc.getContentLength();
        int partialRead = 0;
        // byte chunk[] = new byte[128];
        // byte chunk[] = new byte[1024];
        // byte chunk[] = new byte[4096];
        // byte chunk[] = new byte[8192];
        byte chunk[] = new byte[16384];
        // byte chunk[] = new byte[32768];
        // byte chunk[] = new byte[524288];
        while ((partialRead = is.read(chunk)) != -1)
        {
            // Print if you like..
        }
        is.close();
        long end = System.currentTimeMillis();
        System.out.println("Chunk Size [" + (chunk.length) + "] Time To Complete : " + (end - start));
    }
}
My experience with DigitalOcean Spaces has fluctuated a lot. DO Spaces is, in my opinion, not production ready. I was using their CDN feature for a website, and sometimes the response times would be about 20ms, but sometimes they would exceed 6 seconds. This was in the AMS3 datacenter region.
Can you confirm this happens with other S3/servers as well? Such as gstatic, or Amazon CloudFront CDN?
This fluctuating behaviour happened constantly, which is why we transferred all our assets to Amazon S3 + CloudFront. It provides much more consistent results.
It could be that the phone you are testing on takes a very unoptimized route to the DigitalOcean datacenters. That's why you should try different servers.

MongoDB Batch read implementation issue with change stream replica set

Issue:
An inference-generating process is writing around 300 inference documents per second to a MongoDB collection. The change stream feature of MongoDB is used by another process to read these inferences back and do the post-processing. Currently, only a single inference document is returned each time the change stream API (mongoc_change_stream_next()) is called, so a total of 300 such calls is required to get all the inference data stored within 1 second. However, after each read, around 50 ms is required to perform the post-processing, whether for a single inference or for multiple inferences. Because of the single-document return model, an effective latency of 15x is introduced. To tackle this issue, we are trying to implement a batch read mechanism in line with the change stream feature of MongoDB. We tried various options to implement this, but we still get only one document after each change stream API call. Is there any way to sort out this issue?
Platform:
OS: Ubuntu 16.04
Mongo-c-driver: 1.15.1
Mongo server : 4.0.12
Options tried out:
Setting the batch size of the cursor to more than 1.
#include <mongoc/mongoc.h>
#include <stdio.h>

int main(void) {
    const char *uri_string = "mongodb://localhost:27017/replicaSet=set0";
    mongoc_change_stream_t *stream;
    mongoc_collection_t *coll;
    bson_error_t error;
    mongoc_uri_t *uri;
    mongoc_client_t *client;
    bson_t empty = BSON_INITIALIZER;   /* empty pipeline for the change stream */
    const bson_t *doc;
    const bson_t *err_doc;

    /*
     * Add the MongoDB blocking read and call the inference parse function with the JSON.
     */
    mongoc_init ();
    uri = mongoc_uri_new_with_error (uri_string, &error);
    if (!uri) {
        fprintf (stderr,
                 "failed to parse URI: %s\n"
                 "error message: %s\n",
                 uri_string,
                 error.message);
        return -1;
    }
    client = mongoc_client_new_from_uri (uri);
    if (!client) {
        return -1;
    }
    coll = mongoc_client_get_collection (client, <DB-NAME>, <collection-name>);
    stream = mongoc_collection_watch (coll, &empty, NULL);
    mongoc_cursor_set_batch_size (stream->cursor, 20);
    while (1) {
        while (mongoc_change_stream_next (stream, &doc)) {
            char *as_json = bson_as_relaxed_extended_json (doc, NULL);
            ............
            ............
            // post-processing consuming 50 ms of time
            ............
            ............
        }
        if (mongoc_change_stream_error_document (stream, &error, &err_doc)) {
            if (!bson_empty (err_doc)) {
                fprintf (stderr,
                         "Server Error: %s\n",
                         bson_as_relaxed_extended_json (err_doc, NULL));
            } else {
                fprintf (stderr, "Client Error: %s\n", error.message);
            }
            break;
        }
    }
    return 0;
}
Currently, only a single inference data is returned when the change stream function API (mongoc_change_stream_next()) is called
Technically it's not that only a single document is returned. mongoc_change_stream_next() iterates the underlying cursor, pointing the supplied bson at the next document on each call. So even if the batch returned by the server contains more than one document, you still have to iterate per document.
You could try:
Creating separate threads to process the documents in parallel, so you don't have to wait 50 ms per document, or 15 seconds cumulatively.
Looping through a batch of documents, e.g. caching 50 of them and then performing the post-processing as one batch.
Batch processing them on separate threads (a combination of the two above).
A rough sketch of that drain-then-batch-process pattern is shown below.
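The question uses the C driver; purely to illustrate the drain-then-batch-process idea rather than the mongoc API itself, here is a minimal sketch using the MongoDB Java driver, with hypothetical database and collection names:
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.model.changestream.ChangeStreamDocument;
import org.bson.Document;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BatchedChangeStream {
    static final int BATCH_LIMIT = 50;

    public static void main(String[] args) {
        MongoClient client = MongoClients.create("mongodb://localhost:27017");
        MongoCollection<Document> coll =
                client.getDatabase("inferenceDb").getCollection("inferences"); // hypothetical names
        ExecutorService pool = Executors.newFixedThreadPool(4);

        try (MongoCursor<ChangeStreamDocument<Document>> cursor =
                     coll.watch().batchSize(20).iterator()) {
            while (true) {
                List<Document> batch = new ArrayList<>();
                // Block until at least one change arrives (fullDocument is populated for inserts)...
                batch.add(cursor.next().getFullDocument());
                // ...then drain whatever else the driver already has buffered, up to BATCH_LIMIT.
                ChangeStreamDocument<Document> more;
                while (batch.size() < BATCH_LIMIT && (more = cursor.tryNext()) != null) {
                    batch.add(more.getFullDocument());
                }
                // Hand the whole batch to a worker thread so the ~50 ms post-processing
                // does not stall the thread that is reading the change stream.
                pool.submit(() -> postProcess(batch));
            }
        }
    }

    static void postProcess(List<Document> docs) {
        // placeholder for the 50 ms post-processing step
    }
}
The same drain-then-dispatch structure can be expressed with libmongoc by collecting the documents returned by mongoc_change_stream_next() into a local array and handing that array off to a worker thread once it reaches the desired size or the stream momentarily runs dry.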

Spymemcached - Memcache/Membase failover

Platform: 64-bit Windows OS, spymemcached-2.7.3.jar, J2EE
We want to use two memcache/membase servers for our caching solution. We want to allocate 1 GB of memory to each memcache/membase server, so in total we can cache 2 GB of data.
We are using the spymemcached Java client for setting and getting data from memcache. We are not using any replication between the two membase servers.
We load the MemcachedClient object at J2EE application startup.
URI server1 = new URI("http://192.168.100.111:8091/pools");
URI server2 = new URI("http://127.0.0.1:8091/pools");
ArrayList<URI> serverList = new ArrayList<URI>();
serverList.add(server1);
serverList.add(server2);
client = new MemcachedClient(serverList, "default", "");
After that we use the MemcachedClient to get and set values in the memcache/membase servers.
Object obj = client.get("spoon");
client.set("spoon", 50, "Hello World!");
It looks like the MemcachedClient is setting and getting values only from server1.
If we stop server1, it fails to get/set values. Should it not use server2 when server1 is down? Please let me know if we are doing anything wrong here...
The spymemcached Java client does not handle Membase failover for a particular node.
Ref: https://blog.serverdensity.com/handling-memcached-failover/
We need to handle it manually (in our own code).
We can do this by using a ConnectionObserver.
Here is my code:
public static void main(String a[]) throws InterruptedException {
    try {
        URI server1 = new URI("http://192.168.100.111:8091/pools");
        URI server2 = new URI("http://127.0.0.1:8091/pools");
        final ArrayList<URI> serverList = new ArrayList<URI>();
        serverList.add(server1);
        serverList.add(server2);
        final MemcachedClient client = new MemcachedClient(serverList, "bucketName", "");
        client.addObserver(new ConnectionObserver() {
            @Override
            public void connectionLost(SocketAddress arg0) {
                // method called when a connection is lost
                for (MemcachedNode node : client.getNodeLocator().getAll()) {
                    if (!node.isActive()) {
                        client.shutdown();
                        // re-init your client here; after re-init it will connect to your secondary node
                        break;
                    }
                }
            }
            @Override
            public void connectionEstablished(SocketAddress arg0, int arg1) {
                // method called when a connection is established
            }
        });
        Object obj = client.get("spoon");
        client.set("spoon", 50, "Hello World!");
    } catch (Exception e) {
    }
}
client.get() would use the first available node, and therefore your value would be stored/updated on one node only.
You seem to be contradicting yourself a bit in your requirements: first you say 'we want to allocate 1 GB memory to each memcache/membase server so in total we can cache 2 GB of data', which implies a distributed cache model (a particular key is stored on exactly one node in the cache farm), and then you expect to fetch it when that node is down, which obviously won't happen.
If you need your cache farm to survive a node failure without losing the data cached on that node, you should use replication, which is available in Membase. But then you would obviously pay the price of storing the same values multiple times, so your goal of '1 GB per server... 2 GB of cache in total' won't be possible.

Android InputStream

I am learning Android but I can't get past InputStream.read().
This is just a socket test - the server sends back two bytes when it receives a connection, and I know that this is working fine. All I want to do is read these values. The b = data.read() call reads both values in turn but then hangs; it never returns -1, which is what I expect it to do. It also does not throw an exception.
Any ideas?
Thanks.
protected void startLongRunningOperation() {
    // Fire off a thread to do some work that we shouldn't do directly in the UI thread
    Thread t = new Thread() {
        public void run() {
            try {
                Log.d("Socket", "try connect ");
                Socket sock = new Socket("192.168.0.12", 5001);
                Log.d("socket", "connected");
                InputStream data = sock.getInputStream();
                int b = 0;
                while (b != -1) {
                    b = data.read();
                }
                data.close();
            } catch (Exception e) {
                Log.d("Socket", e.toString());
            }
        }
    };
    t.start();
}
Reaching the end of the stream is a special state. It doesn't happen just because there is nothing left to read. If the stream is still open, but there's nothing to be read, it will "hang" (or block) as you've noticed until a byte comes across.
To do what you want, the server either needs to close/end the stream, or you need to use:
while (data.available() > 0) {
..
When the number of available bytes is zero, there's nothing sitting in the stream buffer to be read.
On the other hand, if you know that there should only ever be two bytes to read, and that's the end of your data, then just read the two bytes and move on (i.e. don't use a while loop). The reason to use a while loop here would only be if you weren't sure how many total bytes to expect.
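As a minimal sketch of that "just read the two bytes" approach, reusing the host and port from the question, you could do something like this (DataInputStream.readFully blocks until exactly that many bytes have arrived):
import java.io.DataInputStream;
import java.net.Socket;

public class TwoByteClient {
    public static void main(String[] args) throws Exception {
        // Host and port taken from the question's snippet.
        try (Socket sock = new Socket("192.168.0.12", 5001);
             DataInputStream in = new DataInputStream(sock.getInputStream())) {
            byte[] reply = new byte[2];
            in.readFully(reply);   // returns once both bytes are in; throws EOFException if the stream ends early
            System.out.println("Got bytes: " + reply[0] + ", " + reply[1]);
        }
        // No need to wait for read() == -1: that only happens after the server
        // closes its side of the connection, which is why the original loop hangs.
    }
}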