Flux/Publisher on main thread - reactive-programming

I'm new to reactive programming / Project Reactor and am trying to understand the concepts. I created a Flux with the range method and subscribed to it. Looking at the log, everything runs on the main thread.
Flux.range(1, 5)
    .log()
    .subscribe(System.out::println);
System.out.println("End of Execution");
[DEBUG] (main) Using Console logging
[ INFO] (main) | onSubscribe([Synchronous Fuseable] FluxRange.RangeSubscription)
[ INFO] (main) | request(unbounded)
[ INFO] (main) | onNext(1)
1
[ INFO] (main) | onNext(2)
2
[ INFO] (main) | onNext(3)
3
[ INFO] (main) | onNext(4)
4
[ INFO] (main) | onNext(5)
5
[ INFO] (main) | onComplete()
End of Execution
Only once the Publisher is done emitting all elements does the rest of the code execute (System.out.println("End of Execution"); in the example above). Does a Publisher block the thread by default? If I change the Scheduler, it seems it no longer blocks the thread.
Flux.range(1, 5)
    .log()
    .subscribeOn(Schedulers.elastic())
    .subscribe(System.out::println);
System.out.println("End of Execution");
Thread.sleep(10000);
[DEBUG] (main) Using Console logging
End of Execution
[ INFO] (elastic-2) | onSubscribe([Synchronous Fuseable] FluxRange.RangeSubscription)
[ INFO] (elastic-2) | request(unbounded)
[ INFO] (elastic-2) | onNext(1)
1
[ INFO] (elastic-2) | onNext(2)
2
[ INFO] (elastic-2) | onNext(3)
3
[ INFO] (elastic-2) | onNext(4)
4
[ INFO] (elastic-2) | onNext(5)
5
[ INFO] (elastic-2) | onComplete()

Reactor does not enforce a concurrency model by default, and yes, many operators will continue the work on the thread where the subscribe() call happened.
But this does not mean that using Reactor blocks the main thread. The sample you're showing does in-memory work, with no I/O or latency involved, and it subscribes right away to the result.
You can try the following snippet and see something different:
Flux.range(1, 5)
    .delayElements(Duration.ofMillis(100))
    .log()
    .subscribe(System.out::println);
System.out.println("End of Execution");
In the logs, I'm seeing:
INFO --- [main] reactor.Flux.ConcatMap.1 : onSubscribe(FluxConcatMap.ConcatMapImmediate)
INFO --- [main] reactor.Flux.ConcatMap.1 : request(unbounded)
End of Execution
In this case, delayElements schedules the work on a different thread (Reactor's parallel Scheduler by default, whose threads are daemon threads). Since nothing here keeps the JVM alive, the application exits and no element from the range is consumed.
In a more common scenario, I/O and latency will be involved and that work will be scheduled in appropriate ways and will not block the main application thread.
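The JVM-exit behaviour above can be reproduced without Reactor at all. The following is a plain-Java sketch (standard library only, all names illustrative) of the same effect: work scheduled on a daemon thread, like those backing Reactor's default time-based Schedulers, is silently lost unless something keeps the main thread alive.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DaemonExit {
    public static void main(String[] args) throws InterruptedException {
        // Daemon threads do not keep the JVM alive on their own.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "demo-scheduler");
            t.setDaemon(true);
            return t;
        });

        CountDownLatch done = new CountDownLatch(1);
        scheduler.schedule(() -> {
            System.out.println("delayed element"); // analogous to a delayed onNext
            done.countDown();
        }, 100, TimeUnit.MILLISECONDS);

        System.out.println("End of Execution");
        // Comment out this await and the JVM may exit before the task runs,
        // just like the delayElements example printed no onNext logs.
        done.await();
        scheduler.shutdown();
    }
}
```

In Reactor itself, the equivalent fixes in a test or demo are a CountDownLatch counted down on completion, a Thread.sleep as in the question, or blocking at the end with an operator such as blockLast().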

Related

Troubleshooting high time taken to publish event to Kafka

Publishing to Kafka takes 3 seconds in one of our environments; in the other environments it takes only 20 milliseconds.
We have done the following to troubleshoot this issue:
We ran a route trace and do not see any network latency.
We analyzed the logs and found the following:
2023-01-05T07:26:24.627Z DEBUG 40 --- [ad | producer-5] o.a.k.c.NetworkClient : [Producer clientId=producer-5] Using older server API v7 to send PRODUCE {acks=-1,timeout=30000,partitionSizes=[esd.mob.vocc.datacollector.sampio.plab01.raw-9=464]} with correlation id 1156 to node 46
2023-01-05T07:26:24.627Z TRACE 40 --- [ad | producer-5] o.a.k.c.p.i.Sender : [Producer clientId=producer-5] Sent produce request to 46: (type=ProduceRequest, acks=-1, timeout=30000, partitionRecords=({esd.mob.vocc.datacollector.sampio.plab01.raw-9=MemoryRecords(size=464, buffer=java.nio.HeapByteBuffer[pos=0 lim=464 cap=464])}), transactionalId=''
2023-01-05T07:26:27.700Z TRACE 40 --- [ad | producer-5] o.a.k.c.NetworkClient : [Producer clientId=producer-5] Completed receive from node 46 for PRODUCE with correlation id 1156, received {responses=[{topic=esd.mob.vocc.datacollector.sampio.plab01.raw,partition_responses=[{partition=9,error_code=0,base_offset=31376,log_append_time=-1,log_start_offset=31001}]}],throttle_time_ms=0}
2023-01-05T07:26:27.700Z TRACE 40 --- [ad | producer-5] o.a.k.c.p.i.Sender : [Producer clientId=producer-5] Received produce response from node 46 with correlation id 1156
The log clearly shows that it takes about 3 seconds (07:26:24.627 to 07:26:27.700) to publish the message.
We are using the code below.
CompletableFuture<SendResult<String, String>> sendResultCompletableFuture = kafkaTemplate.send(payloadMsg);
sendResultCompletableFuture.whenComplete((data, ex) -> {
    if (Objects.nonNull(ex)) {
        LOGGER.error("Exception while publishing message to kafka, {}", ExceptionUtils.getStackTrace(ex));
        throw new KafkaConnectorException(ExceptionUtils.getStackTrace(ex), topic, message);
    } else {
        LOGGER.info("Message handle published to kafka successfully for key ::");
    }
});
Please help us troubleshoot this issue further and share your thoughts.
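One way to narrow down where the 3 seconds are spent is to timestamp around the asynchronous completion, so client-side queuing can be separated from broker round-trip time. A minimal standard-library sketch of the timing pattern (CompletableFuture stands in for kafkaTemplate.send; the 50 ms sleep and all names are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class SendTiming {
    public static void main(String[] args) {
        long start = System.nanoTime();

        // Stand-in for kafkaTemplate.send(payloadMsg): any CompletableFuture-based API.
        CompletableFuture<String> send = CompletableFuture.supplyAsync(() -> {
            sleep(50); // simulated broker round trip
            return "ack";
        });

        send.whenComplete((data, ex) -> {
            long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
            if (ex != null) {
                System.out.println("send failed after " + elapsedMs + " ms");
            } else {
                System.out.println("send completed after " + elapsedMs + " ms");
            }
        }).join();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

If the callback consistently fires about 3 s after send() returns, the time is spent between the producer's sender thread and the broker; on the client side, producer settings such as linger.ms and batch.size, and the initial metadata fetch on the first send, are worth ruling out.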

Does the VSCode problem matcher strip ANSI escape sequences before matching?

I'm making a custom task to run Perl unit tests with yath. The output of that command contains details about failed tests, which I would like to filter and display as problems.
I've written the following matcher for my output.
"problemMatcher": {
    "owner": "yath",
    "fileLocation": [ "relative", "${workspaceFolder}" ],
    "severity": "error",
    "pattern": [
        {
            "regexp": "\\[\\s*FAIL\\s*\\]\\s*job\\s*\\d+\\s*\\+?\\s*(.+)",
            "message": 1
        },
        {
            "regexp": "\\(\\s*DIAG\\s*\\)\\s*job\\s*\\d+\\s*\\+?\\s*at (.+) line (\\d+)\\.",
            "file": 1,
            "line": 2
        }
    ]
}
This is supposed to match two different lines in the following output, which I will present as code for copying, and as a screenshot.
** Defaulting to the 'test' command **
( LAUNCH ) job 1 t/foo.t
( NOTE ) job 1 Seeded srand with seed '20220414' from local date.
[ PASS ] job 1 + passing test
[ FAIL ] job 1 + failing test
( DIAG ) job 1 Failed test 'failing test'
( DIAG ) job 1 at t/foo.t line 57.
[ PLAN ] job 1 Expected assertions: 2
( FAILED ) job 1 t/foo.t
( TIME ) job 1 Startup: 0.30841s | Events: 0.01992s | Cleanup: 0.00417s | Total: 0.33250s
< REASON > job 1 Test script returned error (Err: 1)
< REASON > job 1 Assertion failures were encountered (Count: 1)
The following jobs failed:
+--------------------------------------+-----------------------------------+
| Job ID | Test File |
+--------------------------------------+-----------------------------------+
| e7aee661-b49f-4b60-b815-f420d109457a | t/foo.t |
+--------------------------------------+-----------------------------------+
Yath Result Summary
-----------------------------------------------------------------------------------
Fail Count: 1
File Count: 1
Assertion Count: 2
Wall Time: 0.74 seconds
CPU Time: 0.76 seconds (usr: 0.20s | sys: 0.00s | cusr: 0.49s | csys: 0.07s)
CPU Usage: 103%
--> Result: FAILED <--
But it's actually pretty with colours.
I suspect there are ANSI escape sequences in this output. I could pass a flag to yath to make it not print colours, but I would like to be able to read this output as well, so that isn't ideal.
Do I have to change my pattern to match the escape sequences (I can read the source of the program that prints them, but it's annoying), or are they in fact stripped out and my pattern is wrong, but I can't see where?
Here's the first pattern as a regex101 match, and here's the second.
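For reference, this is what SGR colour sequences look like and how stripping them makes the first pattern match again. This plain-Java sketch (not VS Code itself; the sample line is a guess at the coloured output) uses the common regex for SGR escapes, ESC [ ... m:

```java
import java.util.regex.Pattern;

public class StripAnsi {
    // Matches SGR (colour) escape sequences: ESC, '[', digits/semicolons, 'm'.
    private static final Pattern ANSI_SGR = Pattern.compile("\u001B\\[[;\\d]*m");

    static String strip(String s) {
        return ANSI_SGR.matcher(s).replaceAll("");
    }

    public static void main(String[] args) {
        // Hypothetical coloured yath output line.
        String coloured = "\u001B[31m[  FAIL  ]\u001B[0m job  1 + failing test";
        String clean = strip(coloured);
        System.out.println(clean);

        // The question's first pattern matches the cleaned line.
        Pattern fail = Pattern.compile("\\[\\s*FAIL\\s*\\]\\s*job\\s*\\d+\\s*\\+?\\s*(.+)");
        System.out.println(fail.matcher(clean).find());
    }
}
```

Whether escapes break the match also depends on where the program inserts them; if colours are applied inside the brackets (e.g. around FAIL only), the pattern would need optional escape groups at those positions instead.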

Pytest: How to display failed assertion only once, not twice

I run pytest via PyCharm, and execute a single test:
/home/guettli/projects/lala-env/bin/python /snap/pycharm-professional/230/plugins/python/helpers/pycharm/_jb_pytest_runner.py --target test_models.py::test_address_is_complete
Testing started at 11:53 ...
Launching pytest with arguments test_models.py::test_address_is_complete in /home/guettli/projects/lala-env/src/lala/lala/tests
============================= test session starts ==============================
platform linux -- Python 3.8.5, pytest-6.2.0, py-1.10.0, pluggy-0.13.1 -- /home/guettli/projects/lala-env/bin/python
cachedir: .pytest_cache
django: settings: mysite.settings (from ini)
rootdir: /home/guettli/projects/lala-env/src/lala, configfile: pytest.ini
plugins: django-4.1.0
collecting ... collected 1 item
test_models.py::test_address_is_complete Creating test database for alias 'default' ('test_lala')...
Operations to perform:
Synchronize unmigrated apps: allauth, colorfield, debug_toolbar, google, messages, staticfiles
Apply all migrations: account, admin, auth, contenttypes, lala, sessions, sites, socialaccount
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying account.0001_initial... OK
Applying account.0002_email_max_length... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying admin.0003_logentry_add_action_flag_choices... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying auth.0008_alter_user_username_max_length... OK
Applying auth.0009_alter_user_last_name_max_length... OK
Applying auth.0010_alter_group_name_max_length... OK
Applying auth.0011_update_proxy_permissions... OK
Applying auth.0012_alter_user_first_name_max_length... OK
Applying lala.0001_initial... OK
Applying lala.0002_offer_price... OK
Applying lala.0003_order_amount... OK
Applying lala.0004_auto_20201215_2043... OK
Applying lala.0005_auto_20201229_2148... OK
Applying lala.0006_auto_20201229_2150... OK
Applying lala.0007_auto_20210117_1632... OK
Applying lala.0008_auto_20210117_1632... OK
Applying lala.0009_add_address... OK
Applying lala.0010_auto_20210117_2102... OK
Applying lala.0011_auto_20210119_1909... OK
Applying lala.0012_allergen_short... OK
Applying lala.0013_auto_20210119_1914... OK
Applying lala.0014_auto_20210120_0734... OK
Applying lala.0015_auto_20210120_0752... OK
Applying lala.0016_auto_20210120_1923... OK
Applying lala.0017_allergenuser... OK
Applying lala.0018_address_place... OK
Applying lala.0019_auto_20210126_2027... OK
Applying lala.0020_auto_20210126_2027... OK
Applying lala.0021_recurringoffer_days... OK
Applying lala.0022_auto_20210126_2129... OK
Applying lala.0023_auto_20210201_2056... OK
Applying lala.0024_globalconfig_navbar_title... OK
Applying lala.0025_activationstate... OK
Applying sessions.0001_initial... OK
Applying sites.0001_initial... OK
Applying sites.0002_alter_domain_unique... OK
Applying socialaccount.0001_initial... OK
Applying socialaccount.0002_token_max_lengths... OK
Applying socialaccount.0003_extra_data_default_dict... OK
Destroying test database for alias 'default' ('test_lala')...
FAILED
lala/tests/test_models.py:18 (test_address_is_complete)
user = <User: Dr. Foo>
def test_address_is_complete(user):
address = user.address
> assert address.is_complete
E assert False
E + where False = <Address: Address object (1)>.is_complete
test_models.py:21: AssertionError
Assertion failed
Assertion failed
=================================== FAILURES ===================================
___________________________ test_address_is_complete ___________________________
user = <User: Dr. Foo>
def test_address_is_complete(user):
address = user.address
> assert address.is_complete
E assert False
E + where False = <Address: Address object (1)>.is_complete
test_models.py:21: AssertionError
=========================== short test summary info ============================
FAILED test_models.py::test_address_is_complete - assert False
============================== 1 failed in 2.88s ===============================
Process finished with exit code 1
Assertion failed
Assertion failed
Why does the exception get displayed twice?

Yocto (Zeus) perf build fails

I want to build perf on Yocto (Zeus branch), for an image without python2. The recipe is this one:
https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/recipes-kernel/perf/perf.bb?h=zeus-22.0.4
Running this recipe yields this error:
| ERROR: Execution of '/home/yocto/poseidon-build/tmp/work/imx6dl_poseidon_revb-poseidon-linux-gnueabi/perf/1.0-r9/temp/run.do_compile.19113' failed with exit code 1:
| make: Entering directory '/home/yocto/poseidon-build/tmp/work/imx6dl_poseidon_revb-poseidon-linux-gnueabi/perf/1.0-r9/perf-1.0/tools/perf'
| BUILD: Doing 'make -j4' parallel build
| Warning: arch/x86/include/asm/disabled-features.h differs from kernel
| Warning: arch/x86/include/asm/required-features.h differs from kernel
| Warning: arch/x86/include/asm/cpufeatures.h differs from kernel
| Warning: arch/arm/include/uapi/asm/perf_regs.h differs from kernel
| Warning: arch/arm64/include/uapi/asm/perf_regs.h differs from kernel
| Warning: arch/powerpc/include/uapi/asm/perf_regs.h differs from kernel
| Warning: arch/x86/include/uapi/asm/perf_regs.h differs from kernel
| Warning: arch/x86/include/uapi/asm/kvm.h differs from kernel
| Warning: arch/x86/include/uapi/asm/kvm_perf.h differs from kernel
| Warning: arch/x86/include/uapi/asm/svm.h differs from kernel
| Warning: arch/x86/include/uapi/asm/vmx.h differs from kernel
| Warning: arch/powerpc/include/uapi/asm/kvm.h differs from kernel
| Warning: arch/s390/include/uapi/asm/kvm.h differs from kernel
| Warning: arch/s390/include/uapi/asm/kvm_perf.h differs from kernel
| Warning: arch/s390/include/uapi/asm/sie.h differs from kernel
| Warning: arch/arm/include/uapi/asm/kvm.h differs from kernel
| Warning: arch/arm64/include/uapi/asm/kvm.h differs from kernel
| Warning: arch/x86/lib/memcpy_64.S differs from kernel
| Warning: arch/x86/lib/memset_64.S differs from kernel
|
| Auto-detecting system features:
| ... dwarf: [ on ]
| ... dwarf_getlocations: [ on ]
| ... glibc: [ on ]
| ... gtk2: [ OFF ]
| ... libaudit: [ OFF ]
| ... libbfd: [ on ]
| ... libelf: [ on ]
| ... libnuma: [ OFF ]
| ... numa_num_possible_cpus: [ OFF ]
| ... libperl: [ OFF ]
| ... libpython: [ on ]
| ... libslang: [ on ]
| ... libcrypto: [ on ]
| ... libunwind: [ on ]
| ... libdw-dwarf-unwind: [ on ]
| ... zlib: [ on ]
| ... lzma: [ on ]
| ... get_cpuid: [ OFF ]
| ... bpf: [ on ]
|
| Makefile.config:352: DWARF support is off, BPF prologue is disabled
| Makefile.config:547: Missing perl devel files. Disabling perl scripting support, please install perl-ExtUtils-Embed/libperl-dev
| Makefile.config:594: Python 3 is not yet supported; please set
| Makefile.config:595: PYTHON and/or PYTHON_CONFIG appropriately.
| Makefile.config:596: If you also have Python 2 installed, then
| Makefile.config:597: try something like:
| Makefile.config:598:
| Makefile.config:599: make PYTHON=python2
| Makefile.config:600:
| Makefile.config:601: Otherwise, disable Python support entirely:
| Makefile.config:602:
| Makefile.config:603: make NO_LIBPYTHON=1
| Makefile.config:604:
| Makefile.config:605: *** . Stop.
| Makefile.perf:205: recipe for target 'sub-make' failed
| make[1]: *** [sub-make] Error 2
| Makefile:68: recipe for target 'all' failed
| make: *** [all] Error 2
| make: Leaving directory '/home/yocto/poseidon-build/tmp/work/imx6dl_poseidon_revb-poseidon-linux-gnueabi/perf/1.0-r9/perf-1.0/tools/perf'
| WARNING: exit code 1 from a shell command.
|
ERROR: Task (/home/yocto/sources/poky/meta/recipes-kernel/perf/perf.bb:do_compile) failed with exit code '1'
NOTE: Tasks Summary: Attempted 1947 tasks of which 1946 didn't need to be rerun and 1 failed.
Looking at the recipe, libpython seems to be set:
PACKAGECONFIG ??= "scripting tui libunwind"
PACKAGECONFIG[dwarf] = ",NO_DWARF=1"
PACKAGECONFIG[scripting] = ",NO_LIBPERL=1 NO_LIBPYTHON=1,perl python3"
# gui support was added with kernel 3.6.35
# since 3.10 libnewt was replaced by slang
# to cover a wide range of kernel we add both dependencies
PACKAGECONFIG[tui] = ",NO_NEWT=1,libnewt slang"
PACKAGECONFIG[libunwind] = ",NO_LIBUNWIND=1 NO_LIBDW_DWARF_UNWIND=1,libunwind"
PACKAGECONFIG[libnuma] = ",NO_LIBNUMA=1"
PACKAGECONFIG[systemtap] = ",NO_SDT=1,systemtap"
PACKAGECONFIG[jvmti] = ",NO_JVMTI=1"
# libaudit support would need scripting to be enabled
PACKAGECONFIG[audit] = ",NO_LIBAUDIT=1,audit"
PACKAGECONFIG[manpages] = ",,xmlto-native asciidoc-native"
Why does it not pick up the flag?
PACKAGECONFIG has scripting in it by default.
PACKAGECONFIG options are defined as follows:
PACKAGECONFIG[f1] = "--with-f1, \
--without-f1, \
build-deps-for-f1, \
runtime-deps-for-f1, \
runtime-recommends-for-f1, \
packageconfig-conflicts-for-f1 \
"
PACKAGECONFIG[scripting] is set to ",NO_LIBPERL=1 NO_LIBPYTHON=1,perl python3". See the leading comma? The (empty) value before it is used when scripting is enabled, what comes after it (NO_LIBPERL=1 NO_LIBPYTHON=1) is applied when scripting is not selected, and perl python3 are the build dependencies pulled in when it is.
So if you do not want the python dependency to be pulled in, just set PACKAGECONFIG to a value without scripting in it.
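For example, with a bbappend in your own layer (the file path is illustrative), a minimal sketch:

```
# recipes-kernel/perf/perf.bbappend
# Override the default "scripting tui libunwind": dropping "scripting"
# makes the recipe pass NO_LIBPERL=1 NO_LIBPYTHON=1 to the perf build
# and stops pulling in the perl/python3 build dependencies.
PACKAGECONFIG = "tui libunwind"
```

The same override can also live in local.conf scoped to the recipe, using the Zeus-era underscore syntax: PACKAGECONFIG_pn-perf = "tui libunwind".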
Though I'm actually surprised the default does not build; that's definitely something that is tested by the autobuilders. There's probably something else going on.
c.f.: https://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#var-PACKAGECONFIG

Cannot run RabbitMQ on Linux, cannot find the file asn1.app

I have installed RabbitMQ on CentOS successfully before. However, on another CentOS machine it fails to start.
My Erlang comes from this repository:
[rabbitmq-erlang]
name=rabbitmq-erlang
baseurl=https://dl.bintray.com/rabbitmq/rpm/erlang/20/el/7
gpgcheck=1
gpgkey=https://dl.bintray.com/rabbitmq/Keys/rabbitmq-release-signing-key.asc
repo_gpgcheck=0
enabled=1
This is my erl_crash.dump:
erl_crash_dump:0.5
Sat Jun 23 09:17:30 2018
Slogan: init terminating in do_boot ({error,{no such file or directory,asn1.app}})
System version: Erlang/OTP 20 [erts-9.3.3] [source] [64-bit] [smp:24:24] [ds:24:24:10] [async-threads:384] [hipe] [kernel-poll:true]
Compiled: Tue Jun 19 22:25:03 2018
Taints: erl_tracer,zlib
Atoms: 14794
Calling Thread: scheduler:2
=scheduler:1
Scheduler Sleep Info Flags: SLEEPING | TSE_SLEEPING | WAITING
Scheduler Sleep Info Aux Work:
Current Port:
Run Queue Max Length: 0
Run Queue High Length: 0
Run Queue Normal Length: 0
Run Queue Low Length: 0
Run Queue Port Length: 0
Run Queue Flags: OUT_OF_WORK | HALFTIME_OUT_OF_WORK
Current Process:
=scheduler:2
Scheduler Sleep Info Flags:
Scheduler Sleep Info Aux Work: THR_PRGR_LATER_OP
Current Port:
Run Queue Max Length: 0
Run Queue High Length: 0
Run Queue Normal Length: 0
Run Queue Low Length: 0
Run Queue Port Length: 0
Run Queue Flags: OUT_OF_WORK | HALFTIME_OUT_OF_WORK | NONEMPTY | EXEC
Current Process: <0.0.0>
Current Process State: Running
Current Process Internal State: ACT_PRIO_NORMAL | USR_PRIO_NORMAL | PRQ_PRIO_NORMAL | ACTIVE | RUNNING | TRAP_EXIT | ON_HEAP_MSGQ
Current Process Program counter: 0x00007fbd81fa59c0 (init:boot_loop/2 + 64)
Current Process CP: 0x0000000000000000 (invalid)
How can I identify this problem? Thank you.