Cinnamon takes too long to start - cinnamon

Is this normal?
info t=2016-04-28T09:57:34Z Cinnamon.AppSystem.get_default() started in 51020 ms
info t=2016-04-28T09:57:52Z AppletManager.init() started in 13658 ms
info t=2016-04-28T09:57:52Z Cinnamon took 69321 ms to start

Related

Installing Kafka: we are not able to run the config command

[2023-02-18 15:09:28,915] INFO Snapshot taken in 3 ms (org.apache.zookeeper.server.ZooKeeperServer)
[2023-02-18 15:09:28,932] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
[2023-02-18 15:09:28,932] INFO zookeeper.request_throttler.shutdownTimeout = 10000 (org.apache.zookeeper.server.RequestThrottler)
[2023-02-18 15:09:28,976] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
[2023-02-18 15:09:28,977] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
I am trying to install Kafka, but I am not able to install and run it.

Timeout while streaming messages from message queue

I am processing messages from IBM MQ with a Scala program. It was working fine and then stopped working without any code change.
The timeout occurs intermittently, with no discernible pattern.
I run the application like this:
spark-submit --conf spark.streaming.driver.writeAheadLog.allowBatching=true --conf spark.streaming.driver.writeAheadLog.batchingTimeout=15000 --class com.ibm.spark.streaming.mq.SparkMQExample --master yarn --deploy-mode client --num-executors 1 $jar_file_loc lots of args here >> script.out.log 2>> script.err.log < /dev/null
I tried two properties:
spark.streaming.driver.writeAheadLog.batchingTimeout 15000
spark.streaming.driver.writeAheadLog.allowBatching true
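For illustration, the same keys can also be set on the SparkConf before the streaming context is created. A minimal sketch in PySpark (an assumption for illustration only; the original job is Scala and passes these via --conf, which should be equivalent):

from pyspark import SparkConf, SparkContext
from pyspark.streaming import StreamingContext

# WAL batching properties for the driver; equivalent to the --conf flags above
conf = (SparkConf()
        .setAppName("SparkMQExample")
        .set("spark.streaming.driver.writeAheadLog.allowBatching", "true")
        .set("spark.streaming.driver.writeAheadLog.batchingTimeout", "15000"))

sc = SparkContext(conf=conf)
ssc = StreamingContext(sc, 60)  # batch interval in seconds (placeholder)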
See error:
2021-12-14 14:13:05 WARN ReceivedBlockTracker:90 - Exception thrown while writing record: BatchAllocationEvent(1639487580000 ms,AllocatedBlocks(Map(0 -> Queue()))) to the WriteAheadLog.
java.util.concurrent.TimeoutException: Futures timed out after [5000 milliseconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:223)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:220)
at org.apache.spark.streaming.util.BatchedWriteAheadLog.write(BatchedWriteAheadLog.scala:84)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.writeToLog(ReceivedBlockTracker.scala:238)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.allocateBlocksToBatch(ReceivedBlockTracker.scala:118)
at org.apache.spark.streaming.scheduler.ReceiverTracker.allocateBlocksToBatch(ReceiverTracker.scala:209)
at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:248)
at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:247)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.streaming.scheduler.JobGenerator.generateJobs(JobGenerator.scala:247)
at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:183)
at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:89)
at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:88)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
2021-12-14 14:13:05 INFO ReceivedBlockTracker:57 - Possibly processed batch 1639487580000 ms needs to be processed again in WAL recovery
2021-12-14 14:13:05 INFO JobScheduler:57 - Added jobs for time 1639487580000 ms
2021-12-14 14:13:05 INFO JobGenerator:57 - Checkpointing graph for time 1639487580000 ms
2021-12-14 14:13:05 INFO DStreamGraph:57 - Updating checkpoint data for time 1639487580000 ms
rdd is empty
2021-12-14 14:13:05 INFO JobScheduler:57 - Starting job streaming job 1639487580000 ms.0 from job set of time 1639487580000 ms
2021-12-14 14:13:05 INFO DStreamGraph:57 - Updated checkpoint data for time 1639487580000 ms
2021-12-14 14:13:05 INFO JobScheduler:57 - Finished job streaming job 1639487580000 ms.0 from job set of time 1639487580000 ms
2021-12-14 14:13:05 INFO JobScheduler:57 - Total delay: 5.011 s for time 1639487580000 ms (execution: 0.001 s)
2021-12-14 14:13:05 INFO CheckpointWriter:57 - Submitted checkpoint of time 1639487580000 ms to writer queue
2021-12-14 14:13:05 INFO BlockRDD:57 - Removing RDD 284 from persistence list
2021-12-14 14:13:05 INFO PluggableInputDStream:57 - Removing blocks of RDD BlockRDD[284] at receiverStream at JmsStreamUtils.scala:64 of time 1639487580000 ms
2021-12-14 14:13:05 INFO BlockManager:57 - Removing RDD 284
2021-12-14 14:13:05 INFO JobGenerator:57 - Checkpointing graph for time 1639487580000 ms
2021-12-14 14:13:05 INFO DStreamGraph:57 - Updating checkpoint data for time 1639487580000 ms
2021-12-14 14:13:05 INFO DStreamGraph:57 - Updated checkpoint data for time 1639487580000 ms
2021-12-14 14:13:05 INFO CheckpointWriter:57 - Submitted checkpoint of time 1639487580000 ms to writer queue
Any kind of information would be useful. Thank you!

Celery worker not reading pyrogram session file

I'm trying to execute a pyrogram function via a celery task (scheduling, etc.).
The function works when run via the shell:
from app_name.users.tasks import establish_session
establish_session()
Sending it to celery via establish_session.delay() is where the problem arises.
The exact same function, when executed via celery, fails to read the required session file.
I've confirmed that the session file is visible in both cases and that the os.R_OK, os.W_OK, and os.F_OK permission checks pass.
users.tasks
from celery import shared_task

@shared_task
def establish_session():
    from utils.telegram import get_new_session
    user_bot = get_new_session()
    print(user_bot)
utils.telegram
from pyrogram import Client

def get_new_session():
    import os
    cwd = os.getcwd()
    print(cwd)
    print(os.access('user.session', os.R_OK))  # check for read access
    print(os.access('user.session', os.W_OK))  # check for write access
    print(os.access('user.session', os.X_OK))  # check for execute access
    print(os.access('user.session', os.F_OK))  # check for existence of the file
    user_bot = Client("user", api_id=ID, api_hash=HASH)
    user_bot.start()
    user_bot.stop()
    return user_bot
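As an aside, if the worker's current working directory differs from the shell's, a relative lookup of user.session will resolve differently too. A minimal sketch of pinning the session directory explicitly via pyrogram's workdir parameter (an assumption for illustration; SESSIONS_DIR and the credential values are hypothetical placeholders):

from pyrogram import Client

API_ID = 12345                      # hypothetical placeholder
API_HASH = "0123456789abcdef"       # hypothetical placeholder
SESSIONS_DIR = "/path/to/sessions"  # hypothetical absolute path holding user.session

# workdir controls where pyrogram looks for (and writes) the .session file,
# making the lookup independent of the process's current working directory
user_bot = Client("user", api_id=API_ID, api_hash=API_HASH, workdir=SESSIONS_DIR)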
Difference in outputs:
establish_session()
INFO 2021-01-23 18:07:21,379 connection Connecting...
INFO 2021-01-23 18:07:21,382 connection Connected! Production DC5 - IPv4 - TCPAbridgedO
INFO 2021-01-23 18:07:21,383 session NetworkTask started
INFO 2021-01-23 18:07:21,435 msg_id Time synced: 2021-01-23 10:07:21.439058 UTC
INFO 2021-01-23 18:07:21,439 session NextSaltTask started
INFO 2021-01-23 18:07:21,439 session Next salt in 33m 13s (at 2021-01-23 18:40:35)
INFO 2021-01-23 18:07:21,524 session Session initialized: Layer 122
INFO 2021-01-23 18:07:21,524 session Device: CPython 3.8.6 - Pyrogram 1.1.10
INFO 2021-01-23 18:07:21,524 session System: Linux 5.8.0-33-generic (EN)
INFO 2021-01-23 18:07:21,524 session Session started
INFO 2021-01-23 18:07:21,540 session PingTask started
INFO 2021-01-23 18:07:21,620 dispatcher Started 6 HandlerTasks
INFO 2021-01-23 18:07:21,632 syncer Synced "user" in 11.2832 ms
INFO 2021-01-23 18:07:21,639 syncer Synced "user" in 7.18236 ms
INFO 2021-01-23 18:07:21,640 dispatcher Stopped 6 HandlerTasks
INFO 2021-01-23 18:07:21,640 session PingTask stopped
INFO 2021-01-23 18:07:21,640 session NextSaltTask stopped
INFO 2021-01-23 18:07:21,640 connection Disconnected
INFO 2021-01-23 18:07:21,641 session NetworkTask stopped
INFO 2021-01-23 18:07:21,641 session Session stopped
vs
establish_session.delay()
[2021-01-23 18:07:35,832: INFO/ForkPoolWorker-2] Start creating a new auth key on DC2
[2021-01-23 18:07:35,832: INFO/ForkPoolWorker-2] Connecting...
[2021-01-23 18:07:36,105: INFO/ForkPoolWorker-2] Connected! Production DC2 - IPv4 - TCPAbridgedO
[2021-01-23 18:07:37,592: INFO/ForkPoolWorker-2] Done auth key exchange:
[2021-01-23 18:07:37,592: INFO/ForkPoolWorker-2] Disconnected
[2021-01-23 18:07:37,605: WARNING/ForkPoolWorker-2] Pyrogram v1.1.10, Copyright (C) 2017-2021 Dan <https://github.com/delivrance>
[2021-01-23 18:07:37,605: WARNING/ForkPoolWorker-2] Licensed under the terms of the GNU Lesser General Public License v3 or later (LGPLv3+)
[2021-01-23 18:07:37,605: INFO/ForkPoolWorker-2] Connecting...
[2021-01-23 18:07:37,875: INFO/ForkPoolWorker-2] Connected! Production DC2 - IPv4 - TCPAbridgedO
[2021-01-23 18:07:37,875: INFO/ForkPoolWorker-2] NetworkTask started
[2021-01-23 18:07:38,459: INFO/ForkPoolWorker-2] Time synced: 2021-01-23 10:07:38.353224 UTC
[2021-01-23 18:07:38,732: INFO/ForkPoolWorker-2] NextSaltTask started
[2021-01-23 18:07:38,732: INFO/ForkPoolWorker-2] Next salt in 44m 58s (at 2021-01-23 18:52:37)
[2021-01-23 18:07:39,096: INFO/ForkPoolWorker-2] Session initialized: Layer 122
[2021-01-23 18:07:39,096: INFO/ForkPoolWorker-2] Device: CPython 3.8.6 - Pyrogram 1.1.10
[2021-01-23 18:07:39,096: INFO/ForkPoolWorker-2] System: Linux 5.8.0-33-generic (EN)
[2021-01-23 18:07:39,096: INFO/ForkPoolWorker-2] Session started
[2021-01-23 18:07:39,099: WARNING/ForkPoolWorker-2] Enter phone number or bot token:
[2021-01-23 18:07:39,099: INFO/ForkPoolWorker-2] PingTask started
[2021-01-23 18:07:39,100: INFO/ForkPoolWorker-2] PingTask stopped
[2021-01-23 18:07:39,100: INFO/ForkPoolWorker-2] NextSaltTask stopped
[2021-01-23 18:07:39,100: INFO/ForkPoolWorker-2] Disconnected
[2021-01-23 18:07:39,101: INFO/ForkPoolWorker-2] NetworkTask stopped
[2021-01-23 18:07:39,101: INFO/ForkPoolWorker-2] Session stopped
Any assistance is greatly appreciated!
I did a lot of work to get pyrogram working under celery. It's not ideal, but it works for my case. Maybe this could help you too.
I'm using the latest versions of pyrogram (1.3.5) and celery (5.2.3).
import threading

from celery import Celery
from pyrogram import Client, idle

# first create a client; ":memory:" keeps the session file in memory
tg_client = Client(":memory:", api_id=123, api_hash="abc")

# create the celery app
app = Celery('tasks', broker=BROKER)

@app.task
def some_task():
    print(tg_client.get_me())

# define the celery startup
def run_celery():
    # pool must be threads
    argv = [
        "-A", "tasks", "worker", "--loglevel=info",
        "--pool=threads"]
    app.worker_main(argv)

if __name__ == '__main__':
    tg_client.start()  # <-- I think you can also put it in the first line of `run_celery`
    threading.Thread(target=run_celery, daemon=True).start()
    idle()
    tg_client.stop()
Key points are:
- you need to start the celery worker in a thread other than the main thread, because pyrogram is an async library that relies on the main thread, while celery blocks whatever thread it runs in
- the celery pool must be threads or solo
Besides that, you can also use the client as a context manager inside a task:
@app.task
def some_task():
    with tg_client:
        print(tg_client.get_me())
some references:
https://github.com/pyrogram/pyrogram/issues/480
https://github.com/tgbot-collection/ytdlbot/blob/master/ytdlbot/tasks.py

What does one enter on the command line to run Spark in a bokeh serve app? Do I simply separate the two command-line entries with &&?

My effort does not work:
/usr/local/spark/spark-2.3.2-bin-hadoop2.7/bin/spark-submit --driver-memory 6g --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.3.2 runspark.py && bokeh serve --show bokeh_app
runspark.py contains the instantiation of Spark, and bokeh_app is the folder of the Bokeh server app. Spark is being used to update a streaming Dask dataframe.
WHAT HAPPENS:
The Spark instance starts and loads as it normally would without the Bokeh server. However, as soon as the Bokeh server app kicks in (i.e., the web page opens), the Spark instance shuts down. It doesn't send back any errors in the console output.
OUTPUT BELOW:
2018-11-26 21:04:05 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4f0492c9{/static/sql,null,AVAILABLE,@Spark}
2018-11-26 21:04:06 INFO StateStoreCoordinatorRef:54 - Registered StateStoreCoordinator endpoint
2018-11-26 21:04:06 INFO SparkContext:54 - Invoking stop() from shutdown hook
2018-11-26 21:04:06 INFO AbstractConnector:318 - Stopped Spark@4f3c4272{HTTP/1.1,[http/1.1]}{0.0.0.0:4041}
2018-11-26 21:04:06 INFO SparkUI:54 - Stopped Spark web UI at http://192.168.1.25:4041
2018-11-26 21:04:06 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
2018-11-26 21:04:06 INFO MemoryStore:54 - MemoryStore cleared
2018-11-26 21:04:06 INFO BlockManager:54 - BlockManager stopped
2018-11-26 21:04:06 INFO BlockManagerMaster:54 - BlockManagerMaster stopped
2018-11-26 21:04:07 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2018-11-26 21:04:07 INFO SparkContext:54 - Successfully stopped SparkContext
2018-11-26 21:04:07 INFO ShutdownHookManager:54 - Shutdown hook called
2018-11-26 21:04:07 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-c42ce0b3-d49e-48ce-962c-277b42166267
2018-11-26 21:04:07 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-bd448b2e-6b0f-467a-9e43-689542c42a6f
2018-11-26 21:04:07 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-bd448b2e-6b0f-467a-9e43-689542c42a6f/pyspark-117d2a10-7cb9-4eb3-b4d0-f92f9046522c
2018-11-26 21:04:08,542 Starting Bokeh server version 0.13.0 (running on Tornado 5.1.1)
2018-11-26 21:04:08,547 Bokeh app running at: http://localhost:5006/aion_analytics
2018-11-26 21:04:08,547 Starting Bokeh server with process id: 10769
OK, I found the answer. The idea is simply to embed the Bokeh server in the PySpark code instead of running the Bokeh server from the command line. Use the spark-submit command as normal.
https://github.com/bokeh/bokeh/blob/1.0.1/examples/howto/server_embed/standalone_embed.py
I did exactly what is shown in the link above.
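For reference, a minimal sketch of that approach, condensed from the standalone_embed example linked above (make_doc, the route, and the port are placeholders): the Bokeh server is started inside the script that spark-submit launches, so Spark and Bokeh share one process and Spark is not torn down when Bokeh starts.

from bokeh.application import Application
from bokeh.application.handlers.function import FunctionHandler
from bokeh.server.server import Server

def make_doc(doc):
    # build the plots/widgets for one browser session and attach them to the document
    from bokeh.plotting import figure
    fig = figure(title="demo")
    fig.line([0, 1, 2], [0, 1, 4])
    doc.add_root(fig)

# ... set up the SparkContext / streaming job from runspark.py here ...

server = Server({'/bokeh_app': Application(FunctionHandler(make_doc))}, port=5006)
server.start()
server.io_loop.add_callback(server.show, "/bokeh_app")
server.io_loop.start()  # blocks here, keeping both Spark and the Bokeh app alive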

OrientDB & .Net driver: Unable to read data from the transport connection

I am getting an error while reading the network stream from a successful socket connection. Please see the debug log from OrientDB:
2016-04-08 18:08:51:590 WARNI Not enough physical memory available for DISKCACHE: 1,977MB (heap=494MB). Set lower Maximum Heap (-Xmx setting on JVM) and restart OrientDB. Now running with DISKCACHE=256MB [orientechnologies]
2016-04-08 18:08:51:606 INFO OrientDB config DISKCACHE=-566MB (heap=494MB os=1,977MB disk=16,656MB) [orientechnologies]
2016-04-08 18:08:51:809 INFO Loading configuration from: C:/inetpub/wwwroot/orientdb-2.1.5/config/orientdb-server-config.xml... [OServerConfigurationLoaderXml]
2016-04-08 18:08:52:292 INFO OrientDB Server v2.1.5 (build 2.1.x#r${buildNumber}; 2015-10-29 16:54:25+0000) is starting up... [OServer]
2016-04-08 18:08:52:370 INFO Databases directory: C:\inetpub\wwwroot\orientdb-2.1.5\databases [OServer]
2016-04-08 18:08:52:495 INFO Listening binary connections on 127.0.0.1:2424 (protocol v.32, socket=default) [OServerNetworkListener]
2016-04-08 18:08:52:511 INFO Listening http connections on 127.0.0.1:2480 (protocol v.10, socket=default) [OServerNetworkListener]
2016-04-08 18:08:52:573 INFO Installing dynamic plugin 'studio-2.1.zip'... [OServerPluginManager]
2016-04-08 18:08:52:838 INFO Installing GREMLIN language v.2.6.0 - graph.pool.max=50 [OGraphServerHandler]
2016-04-08 18:08:52:838 INFO [OVariableParser.resolveVariables] Error on resolving property: distributed [orientechnologies]
2016-04-08 18:08:52:854 INFO Installing Script interpreter. WARN: authenticated clients can execute any kind of code into the server by using the following allowed languages: [sql] [OServerSideScriptInterpreter]
2016-04-08 18:08:52:854 INFO OrientDB Server v2.1.5 (build 2.1.x#r${buildNumber}; 2015-10-29 16:54:25+0000) is active. [OServer]
2016-04-08 18:08:57:986 INFO /127.0.0.1:49243 - Connected [OChannelBinaryServer]
2016-04-08 18:08:58:002 INFO /127.0.0.1:49243 - Writing short (2 bytes): 32 [OChannelBinaryServer]
2016-04-08 18:08:58:002 INFO /127.0.0.1:49243 - Flush [OChannelBinaryServer]
2016-04-08 18:08:58:002 INFO /127.0.0.1:49243 - Reading byte (1 byte)... [OChannelBinaryServer]
I am using the OrientDB .Net binary (C# driver) on Windows Vista. This was working fine until recently; not sure what broke it...
Resetting TCP/IP using the NetShell utility did not help.
Any help is highly appreciated.
The problem was with the AVG anti-virus program, which was blocking the socket. Adding an exception in the program for localhost fixed the problem.