I have the following data:
Time series data is stored in InfluxDB with a tag "State" (CA, MA, IL, FL, etc.) and a field "value" (numeric).
I am now able to get data in this form:
Time    CA    MA    IL    FL
7/15    **    **    **    **
7/16    **    **    **    **
7/17    **    **    **    **
How can I switch columns and rows in Grafana or InfluxDB so that the table looks like this instead?
State    7/15    7/16    7/17
CA       ***     ***     ***
MA       ***     ***     ***
IL       ***     ***     ***
FL       ***     ***     ***
The data is pulled from InfluxDB into a Grafana dashboard.
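For illustration only, here is a minimal sketch of the reshape being asked for, done outside Grafana in Python with pandas; the DataFrame is a hypothetical stand-in for the query result above (the values are made up), not the actual InfluxDB output.

import pandas as pd

# Hypothetical stand-in for the query result shown above; values are made up.
df = pd.DataFrame({
    "Time": ["7/15", "7/16", "7/17"],
    "CA": [1.0, 2.0, 3.0],
    "MA": [4.0, 5.0, 6.0],
    "IL": [7.0, 8.0, 9.0],
    "FL": [10.0, 11.0, 12.0],
})

# Use "Time" as the index and transpose, so the states become rows and the
# dates become columns; then label the new index column "State".
transposed = df.set_index("Time").T
transposed.index.name = "State"
print(transposed)

The same swap may be possible closer to the data source, but the sketch above at least pins down the exact transformation being requested.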
In our Google Play developer console, for one of our applications, we are seeing a very high number of crashes on Huawei devices (Android 8.0 and Android 9.0).
We are developing the application using Unity 2019.4.8f1. The application contains:
- Firebase
- Flurry
- AdMob
The reason given for the crashes is "java.lang.Error(no location available)".
The crash report is as follows:
java.lang.Error: *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
Version '2019.4.8f1 (60781d942082)', Build type 'Release', Scripting Backend 'il2cpp', CPU 'armeabi-v7a'
Build fingerprint: 'HUAWEI/MRD-L41A/HWMRD-M1:9/HUAWEIMRD-LX1F/9.1.0.333C185:user/release-keys'
Revision: '0'
ABI: 'arm'
Timestamp: 2020-09-22 07:52:03+0300
pid: 10152, tid: 10517, name: Thread-96 >>> com.iz.bible.word.puzzle.games.connect.collect.verses <<<
uid: 10198
signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x2a
Cause: null pointer dereference
r0 00000000 r1 ca712274 r2 b47ce538 r3 b47ce624
r4 b47ce624 r5 00000000 r6 00000000 r7 b47ce538
r8 00000003 r9 c3ada800 r10 00000000 r11 b47ce6ec
ip cb665ea4 sp b47ce4e8 lr cd11dd3f pc ca74614c
backtrace:
at .
at libil2cpp.0x2bb14c (Native Method)
How do we solve this crash? Our downloads are being adversely affected by it.
Thanks!
I'm facing a really strange problem. My app is crashing only on the iPhone X, and only on the device, not on the simulator. Looking at the debug instruments, it seems that RAM usage reaches its maximum and the app crashes.
The debug area shows:
ZTL City(6387,0x16c707000) malloc: *** mach_vm_map(size=9437184) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
ZTL City(6387,0x16c81f000) malloc: *** mach_vm_map(size=9437184) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
ZTL City(6387,0x16c2eb000) malloc: *** mach_vm_map(size=9437184) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
ZTL City(6387,0x16c707000) malloc: *** mach_vm_map(size=9437184) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
ZTL City(6387,0x16c377000) malloc: *** mach_vm_map(size=9437184) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
ZTL City(6387,0x16c793000) malloc: *** mach_vm_map(size=9437184) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
ZTL City(6387,0x16c81f000) malloc: *** mach_vm_map(size=9437184) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Does anyone know why? It happens only on the iPhone X, and only on the device, when FlyOver is enabled from the Map tab.
Thank you!
Can anyone say if there is any practical limit to the number of databases in MongoDB? I started having serious problems once I passed 120 databases. Simple things like:
> show dbs
Mon Feb 10 16:35:32 DBClientCursor::init call() failed
Mon Feb 10 16:35:32 query failed : admin.$cmd { listDatabases: 1.0 } to: 127.0.0.1:27017
Mon Feb 10 16:35:32 Error: error doing query: failed src/mongo/shell/collection.js:155
Mon Feb 10 16:35:32 trying reconnect to 127.0.0.1:27017
Mon Feb 10 16:35:32 reconnect 127.0.0.1:27017 failed couldn't connect to server 127.0.0.1:27017
>
Mon Feb 10 16:36:01 trying reconnect to 127.0.0.1:27017
Mon Feb 10 16:36:01 reconnect 127.0.0.1:27017 failed couldn't connect to server 127.0.0.1:27017
>
Mon Feb 10 16:37:01 trying reconnect to 127.0.0.1:27017
Mon Feb 10 16:37:01 reconnect 127.0.0.1:27017 ok
and
> getMemInfo()
{ "virtual" : 32, "resident" : 7 }
Mon Feb 10 16:39:00 DBClientCursor::init call() failed
Mon Feb 10 16:39:00 query failed : admin.$cmd { replSetGetStatus: 1.0, forShell: 1.0 } to: 127.0.0.1:27017
> shell
Mon Feb 10 16:39:38 ReferenceError: shell is not defined (shell):1
Mon Feb 10 16:39:38 trying reconnect to 127.0.0.1:27017
Mon Feb 10 16:39:38 reconnect 127.0.0.1:27017 ok
Yet the log file remained enigmatic.
What version of MongoDB are you running, and on what host?
Here is a test on CentOS 6.5, with MongoDB 2.2 x86_64 direct from EPEL.
Here is a sample Python script that creates 1000 databases:
from pymongo import MongoClient

mc = MongoClient()
for i in range(1000):  # create one tiny collection in each of 1000 databases
    print i
    mc['db%s'%(i)].test.insert({"test":True})
output:
...snip...
506
Traceback (most recent call last):
File "overload_mongo.py", line 6, in <module>
mc['db%s'%(i)].test.insert({"test":True})
File "/usr/lib64/python2.6/site-packages/pymongo/collection.py", line 357, in insert
continue_on_error, self.__uuid_subtype), safe)
File "/usr/lib64/python2.6/site-packages/pymongo/mongo_client.py", line 929, in _send_message
raise AutoReconnect(str(e))
pymongo.errors.AutoReconnect: [Errno 104] Connection reset by peer
There it is. Looking at the log:
ERROR: Uncaught std::exception: boost::filesystem::basic_directory_iterator constructor: Too many open files: "/index/bauman/db/_tmp/esort.1392056635.506/", terminating
The good old "too many open files" problem.
If you are on an enterprise Linux platform, you can drop the following into /etc/security/limits.d/mongodb.conf and start a new session:
mongodb hard nofile 99999
mongodb soft nofile 99999
mongodb hard nproc 99999
mongodb soft nproc 99999
I don't know how to achieve a similar result on Windows.
The "problem" is that MongoDB wants to memory-map every single database file, so you need your host OS to allow it to keep that many files open.
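As a quick sanity check after raising the limits, here is a minimal sketch (standard-library Python on Linux, assuming you run it from the same session and user that launch mongod) that prints the open-file limits a child process would inherit:

import resource

# Soft and hard RLIMIT_NOFILE for the current session; a mongod started
# from this session inherits these values.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft nofile limit: %d" % soft)
print("hard nofile limit: %d" % hard)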
Same code as above
python overload_mongo.py
Output
...snip...
995
996
997
998
999
All better
I have successfully compiled pgagent from source on a CentOS 6.2 server.
When I try to launch pgagent with the following command:
/usr/bin/pgagent -l 2 hostaddr=server.com dbname=postgres user=postgres
I get the following error messages:
DEBUG: user : postgres
DEBUG: port : 0
DEBUG: host : server.com
DEBUG: dbname : postgres
DEBUG: password :
DEBUG: conn timeout : 0
DEBUG: Connection Information:
DEBUG: user : postgres
DEBUG: port : 0
DEBUG: host : server.com
DEBUG: dbname : postgres
DEBUG: password :
DEBUG: conn timeout : 0
DEBUG: Creating DB connection: user=postgres host=server.com dbname=postgres
DEBUG: Database sanity check
DEBUG: Clearing zombies
WARNING: Query error: ERROR: could not extend file "base/12870/12615": No space left on device
HINT: Check free disk space.
WARNING: Query error: ERROR: relation "pga_tmp_zombies" does not exist
LINE 1: INSERT INTO pga_tmp_zombies (jagpid) SELECT jagpid FROM pg...
^
WARNING: Query error: ERROR: table "pga_tmp_zombies" does not exist
WARNING: Query error: ERROR: could not extend file "base/12870/17167": No space left on device
HINT: Check free disk space.
WARNING: Couldn't create the primary connection (attempt 1): ERROR: could not extend file "base/12870/17167": No space left on device
HINT: Check free disk space.
DEBUG: Clearing all connections
DEBUG: Connection stats: total - 1, free - 0, deleted - 1
Any idea of the source of the problem?
WARNING: Query error: ERROR: could not extend file "base/12870/12615": No space left on device
HINT: Check free disk space.
You're out of disk space, or (rather unlikely) a disk quota limit has been hit for the database user. On some platforms and file systems it's also possible that you've run out of inodes. Check df.
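If it turns out to be inodes rather than raw space, df -i will show it. As a minimal sketch, the same check from Python using os.statvfs (the data directory path is an assumption; adjust it to your actual PGDATA):

import os

# Path is an assumption; point it at your actual PostgreSQL data directory.
st = os.statvfs("/var/lib/pgsql/data")
free_mb = st.f_bavail * st.f_frsize / (1024 * 1024)
print("free space : %d MB" % free_mb)
print("free inodes: %d of %d" % (st.f_favail, st.f_files))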