Migration from ActiveMQ "Classic" 5.15.9 to ActiveMQ Artemis 2.17.0 - activemq-artemis

Given this simple configuration:
Server - ActiveMQ
Client1 - Producer. Sends messages using OPENWIRE protocol
Client2 - Consumer. Receives messages using STOMP protocol
It works fine as long as we're using ActiveMQ 5.15.9, but it does not work with Artemis.
In the Artemis GUI I can see that the producer using the OPENWIRE protocol uses a multicast queue named VirtualTopic.Some.Topic.Name and the consumer using the STOMP protocol uses a multicast queue named /topic/VirtualTopic.Some.Topic.Name.
And when Client1 (the producer) sends a message, I can see these DEBUG log entries:
2022-02-17 09:57:36,642 DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Couldn't find any bindings for address=VirtualTopic.Some.Topic.Name on message=CoreMessage[messageID=12884902991,durable=true,userID=a71cfe57-8fcf-11ec-8d24-e470b8b47a8a,priority=4, timestamp=Thu Feb 17 09:57:36 CET 2022,expiration=0, durable=true, address=VirtualTopic.Some.Topic.Name,size=1259,properties=TypedProperties[__AMQ_CID=ID:D118010-54523-637806885913733333-0:0,_AMQ_GROUP_SEQUENCE=0,__HDR_BROKER_IN_TIME=1645088256634,_AMQ_ROUTING_TYPE=0,__HDR_ARRIVAL=0,__HDR_COMMAND_ID=9,__HDR_PRODUCER_ID=[0000 003B 7B01 0027 4944 3A44 3131 3830 3130 2D35 3435 3233 2D36 3337 3830 ... 35 3931 3337 3333 3333 332D 313A 3000 0000 0000 0000 0100 0000 0000 0000 01),__HDR_MESSAGE_ID=[0000 004E 6E00 017B 0100 2749 443A 4431 3138 3031 302D 3534 3532 332D 3633 ... 0000 0000 0001 0000 0000 0000 0001 0000 0000 0000 0001 0000 0000 0000 0000),__HDR_DROPPABLE=false]]#7144975
2022-02-17 09:57:36,643 DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Message CoreMessage[messageID=12884902991,durable=true,userID=a71cfe57-8fcf-11ec-8d24-e470b8b47a8a,priority=4, timestamp=Thu Feb 17 09:57:36 CET 2022,expiration=0, durable=true, address=VirtualTopic.Some.Topic.Name,size=1259,properties=TypedProperties[__AMQ_CID=ID:D118010-54523-637806885913733333-0:0,_AMQ_GROUP_SEQUENCE=0,__HDR_BROKER_IN_TIME=1645088256634,_AMQ_ROUTING_TYPE=0,__HDR_ARRIVAL=0,__HDR_COMMAND_ID=9,__HDR_PRODUCER_ID=[0000 003B 7B01 0027 4944 3A44 3131 3830 3130 2D35 3435 3233 2D36 3337 3830 ... 35 3931 3337 3333 3333 332D 313A 3000 0000 0000 0000 0100 0000 0000 0000 01),__HDR_MESSAGE_ID=[0000 004E 6E00 017B 0100 2749 443A 4431 3138 3031 302D 3534 3532 332D 3633 ... 0000 0000 0001 0000 0000 0000 0001 0000 0000 0000 0001 0000 0000 0000 0000),__HDR_DROPPABLE=false]]#7144975 is not going anywhere as it didn't have a binding on address:VirtualTopic.Some.Topic.Name
Our use-case is that we have one backend server that sends events via ActiveMQ and X consumers. The decision to use virtual topics was made by another team, and I do not really know how they consume events. On my side, I have a simulator that consumes events and fakes that sub-system.
Is it possible to make Artemis work when clients are using different protocols?

I see you're using the /topic/ prefix for your STOMP consumer. I recommend you configure the appropriate anycastPrefix and multicastPrefix settings on the acceptor which the STOMP client is using. These settings control the semantics used by the broker when auto-creating addresses & queues and when routing messages. For example:
<acceptor name="stomp">tcp://0.0.0.0:61613?protocols=STOMP;useEpoll=true;anycastPrefix=/queue/;multicastPrefix=/topic/</acceptor>
See the documentation for more details.
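With that in place, a STOMP subscription frame like the one below (illustrative; the destination name is taken from your example) has its /topic/ prefix stripped by the broker, so the consumer's subscription queue is bound to the same VirtualTopic.Some.Topic.Name multicast address the OpenWire producer sends to:
SUBSCRIBE
id:sub-0
destination:/topic/VirtualTopic.Some.Topic.Name
ack:auto

^@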
The DEBUG messages you're seeing are expected when a message is sent to a multicast address and there are no queues bound to that address as is the case when a JMS client sends a message to a JMS topic and there are no JMS subscribers. I think you can safely ignore those messages.
Lastly, I recommend you move to ActiveMQ Artemis 2.20.0. Over 300 Jiras have been resolved between 2.17.0 and 2.20.0.

Related

A few odd things in a tcpdump capture for a database replication stream

I'm trying to resolve the following performance problem. There is a database which is synchronously replicated to a remote location via TCP. Currently, everything works great. But it's being migrated to new hardware, and a test load shows that everything slows down roughly by a factor of 2. Basically, the current setup supports sustained transfer rates of 200-300 MB/s whereas the new one gets 100-150 MB/s at best, and it's not good enough for us.
There is nothing obviously wrong from the database side. Database instrumentation says that the source database is busy sending data on the network (by large chunks, tens of MB at a time), and the destination one is busy receiving it on the network. So I'm looking at the TCP packet capture in Wireshark and I notice a few things that look a bit odd in the new setup -- see a sample below.
AFAIK the window scaling factor is 7 for this conversation, so the receive window gets a x128 factor, which means that most of the time it's not the limiting factor.
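(Working that out from the capture below: the receiver's advertised Win=24525 scales to 24525 x 128 ≈ 3,139,200 bytes, about 3.1 MB, which is far more than the single ~8 kB segment that is in flight at any time.)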
First of all, most of the time there is only one packet in flight per ACK, which is not the case in the existing setup, where I can see bursts of tens of outgoing packets. Is this the Nagle algorithm in action or something else? It's supposed to be off (there is a TCP no-delay option at the application level), but it's still a bit suspicious.
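("TCP no-delay at the application level" normally means the application sets the TCP_NODELAY option on its own sockets; a minimal Java sketch of what that looks like, purely illustrative and not the replication software's actual code:
import java.net.Socket;

public class NoDelayDemo {
    public static void main(String[] args) throws Exception {
        // hypothetical endpoint borrowed from the capture below
        try (Socket s = new Socket("192.168.240.115", 1600)) {
            s.setTcpNoDelay(true);                     // disable Nagle: small writes go out immediately
            s.getOutputStream().write(new byte[8156]); // each write is handed to the TCP stack as one 8156-byte chunk
        }
    }
}
)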
Second, I don't understand the timings. It's almost as if something is controlling the rate of outgoing packets and keeps it to roughly one packet every 50 us (sometimes a bit more, sometimes a bit less), rather than the next packet leaving within a couple of microseconds of getting an ACK. Could there be some sort of burst control in place, or am I imagining things?
Third, segment size. Most segments are 8 kB, compared to the existing setup where they are 64 kB. We experimented with the application settings but we can't seem to make a difference -- 64 kB segments are there, but they are rare. Is there a way in Linux to strongly encourage larger segments?
36 2022-09-01 15:02:45.267111 192.168.240.122 192.168.240.115 TCP 8210 45508 → 1600 [PSH, ACK] Seq=2162935757 Ack=3197136358 Win=6166 Len=8156
37 2022-09-01 15:02:45.267115 192.168.240.115 192.168.240.122 TCP 54 1600 → 45508 [ACK] Seq=3197136358 Ack=2162943913 Win=24525 Len=0
38 2022-09-01 15:02:45.267162 192.168.240.122 192.168.240.115 TCP 8210 45508 → 1600 [PSH, ACK] Seq=2162943913 Ack=3197136358 Win=6166 Len=8156
39 2022-09-01 15:02:45.267166 192.168.240.115 192.168.240.122 TCP 54 1600 → 45508 [ACK] Seq=3197136358 Ack=2162952069 Win=24525 Len=0
40 2022-09-01 15:02:45.267212 192.168.240.122 192.168.240.115 TCP 8210 45508 → 1600 [PSH, ACK] Seq=2162952069 Ack=3197136358 Win=6166 Len=8156
41 2022-09-01 15:02:45.267215 192.168.240.115 192.168.240.122 TCP 54 1600 → 45508 [ACK] Seq=3197136358 Ack=2162960225 Win=24525 Len=0
42 2022-09-01 15:02:45.267261 192.168.240.122 192.168.240.115 TCP 8210 45508 → 1600 [PSH, ACK] Seq=2162960225 Ack=3197136358 Win=6166 Len=8156
43 2022-09-01 15:02:45.267265 192.168.240.115 192.168.240.122 TCP 54 1600 → 45508 [ACK] Seq=3197136358 Ack=2162968381 Win=24525 Len=0
44 2022-09-01 15:02:45.267313 192.168.240.122 192.168.240.115 TCP 8210 45508 → 1600 [PSH, ACK] Seq=2162968381 Ack=3197136358 Win=6166 Len=8156
45 2022-09-01 15:02:45.267318 192.168.240.115 192.168.240.122 TCP 54 1600 → 45508 [ACK] Seq=3197136358 Ack=2162976537 Win=24525 Len=0
46 2022-09-01 15:02:45.267342 192.168.240.122 192.168.240.115 TCP 8210 45508 → 1600 [PSH, ACK] Seq=2162976537 Ack=3197136358 Win=6166 Len=8156
47 2022-09-01 15:02:45.267346 192.168.240.115 192.168.240.122 TCP 54 1600 → 45508 [ACK] Seq=3197136358 Ack=2162984693 Win=24525 Len=0
48 2022-09-01 15:02:45.267391 192.168.240.122 192.168.240.115 TCP 8210 45508 → 1600 [PSH, ACK] Seq=2162984693 Ack=3197136358 Win=6166 Len=8156
Any suggestions are greatly appreciated.
Thanks!
Update: I've shared packet capture files on sender and receiver sides for both current setup and old setup at https://drive.google.com/drive/folders/1ktBDjRHOUCfia1kTfdVIQdS-Q1k4B3qn
Update2: I've written a blog entry about this investigation for those interested: https://savvinov.com/2022/09/20/use-of-packet-capture-and-other-advanced-tools-in-network-issues-troubleshooting/
Best regards,
Nikolai
While I couldn't find answers to all of my questions, I found the ones that mattered most.
It turned out that the TCP stack was sending data in 8 kB segments because the "application" handed the data to it that way. By "application" I mean the replication software (Oracle Data Guard) that picked up a stream of database changes on the source database and wrote it to the remote standby.
So eventually I traced tcp_sendmsg using the BCC trace.py utility and found that its size argument was about 8 kB (8156 bytes, to be more specific). Then I traced the network stack at the "application" level, forcing the connection to be re-established during the tracing, and it turned out that the parameter controlling the size of the transmission (SDU, or session data unit) was supposed to be 64 kB in the settings, but in fact the new connection was using a smaller value, 8 kB.
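(For anyone wanting to reproduce the tcp_sendmsg part: a typical invocation of the BCC trace tool looks something like the command below -- this is my reconstruction, and the exact path and probe syntax vary by distro and version. tcp_sendmsg's third argument is the size the application handed down:
/usr/share/bcc/tools/trace 'tcp_sendmsg "size = %d", arg3'
)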
Further research showed that there were a number of oddities around the way this parameter is set, and also that the documentation around it was inaccurate.
Once the correct way to set the value was found by trial and error, throughput immediately improved and all the bottlenecks that had bothered us disappeared.
Best regards,
Nikolai

How commit timestamps work internally

These files are present in the pg_commit_ts folder:
-rw-------. 1 postgres postgres 262144 Jun 17 12:56 0000
-rw-------. 1 postgres postgres 262144 Jun 17 12:56 0001
-rw-------. 1 postgres postgres 262144 Jun 17 12:57 0002
...
Are those files created only if track_commit_timestamp is on?
Yes, these files are only created if track_commit_timestamp = on. You cannot get the last committed statement, but you can use pg_last_committed_xact() to get the timestamp and transaction ID of the last committed transaction (see the documentation).
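For example (this requires track_commit_timestamp = on; the xid in the second query is just a placeholder):
SELECT * FROM pg_last_committed_xact();          -- xid and timestamp of the last committed transaction
SELECT pg_xact_commit_timestamp('12345'::xid);   -- commit timestamp of a specific transaction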

How to prevent emails being sent with fake accounts in domain

Short version: What can be done to prevent emails being sent from our SMTP mail server using fake accounts that do not really exist in the domain?
Longer version: We use Plesk to manage our site hosted on a Windows VPS. By enabling SMTP logging on MailEnable, I notice that a lot of emails are being sent with accounts that do not exist in the domain. I reproduce below a small portion of the log. Here stolav-gw4#ourDomain.com, tango#ourDomain.com are accounts that do not exist in our domain. What can be done to prevent such emails from being sent?
Things I have already tried that haven't stopped these:
I have already set the SPF record. The entry is: v=spf1 a mx -all
I have changed all the passwords. That hasn't helped.
I have enabled DKIM
I ran the following virus/malware detectors and they found nothing: VirusTotal Website Check, MSERT.exe from Microsoft, MSRT.exe from Microsoft
2021-02-17 06:00:02 212.70.149.71 SMTP-IN - our.ip.address.here 1228 AUTH {blank} 334+UGFzc3dvcmQ6 WIN-DFQOE4PNR36 18 38 stolav-gw4#ourDomain.com
2021-02-17 06:00:03 212.70.149.71 SMTP-IN - 104.128.234.235 1296 RSET RSET 250+Requested+mail+action+okay,+completed WIN-DFQOE4PNR36 43 6 -
2021-02-17 06:00:03 212.70.149.85 SMTP-IN - 104.128.234.235 1448 QUIT QUIT 221+Service+closing+transmission+channel WIN-DFQOE4PNR36 42 6 tango#ourDomain.com
2021-02-17 06:00:04 87.246.7.242 SMTP-IN - our.ip.address.here 1876 EHLO EHLO+User 250-ourDomain.com+[87.246.7.242],+this+server+offers+5+extensions WIN-DFQOE4PNR36 242 11 -
2021-02-17 06:00:04 212.70.149.85 SMTP-IN - our.ip.address.here 1848 AUTH {blank} 334+UGFzc3dvcmQ6 WIN-DFQOE4PNR36 18 34 tango#ourDomain.com
2021-02-17 06:00:04 212.70.149.71 SMTP-IN - our.ip.address.here 1228 AUTH c3RvbGF2LWd3NEAxMjM= 535+Invalid+Username+or+Password WIN-DFQOE4PNR36 34 22 stolav-gw4#ourDomain.com
2021-02-17 06:00:04 212.70.149.71 SMTP-IN - 104.128.234.235 1296 AUTH AUTH+LOGIN 334+VXNlcm5hbWU6 WIN-DFQOE4PNR36 18 12 -
2021-02-17 06:00:05 87.246.7.242 SMTP-IN - our.ip.address.here 1876 RSET RSET 250+Requested+mail+action+okay,+completed WIN-DFQOE4PNR36 43 6 -
2021-02-17 06:00:05 212.70.149.71 SMTP-IN - our.ip.address.here 1228 QUIT QUIT 221+Service+closing+transmission+channel WIN-DFQOE4PNR36 42 6 stolav-gw4#ourDomain.com
2021-02-17 06:00:05 212.70.149.85 SMTP-IN - our.ip.address.here 1848 AUTH Y3Zibm0xMjM= 535+Invalid+Username+or+Password WIN-DFQOE4PNR36 34
Start using a proper DMARC record in your DNS: https://www.linuxbabe.com/mail-server/create-dmarc-record
You would probably want the reject policy: p=reject tells receiving email servers to reject the email if the DMARC check fails.
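For example, the policy is published as a TXT record on the _dmarc subdomain (the rua reporting address below is just a placeholder):
_dmarc.ourDomain.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:postmaster@ourDomain.com"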
Might want to read all the parts on that site. I used it once to set up my mail server and it's very informative.
The IP that is abusing your mail server is known for doing that. My logs:
Mar 25 04:34:12 main postfix/smtps/smtpd[35405]: warning: unknown[212.70.149.71]: SASL LOGIN authentication failed: UGFzc3dvcmQ6
Mar 25 04:34:18 main postfix/smtps/smtpd[35405]: lost connection after AUTH from unknown[212.70.149.71]
Mar 25 04:34:18 main postfix/smtps/smtpd[35405]: disconnect from unknown[212.70.149.71] ehlo=1 auth=0/1 rset=1 commands=2/3
Mar 25 04:35:27 main postfix/smtps/smtpd[35405]: connect from unknown[212.70.149.71]
Mar 25 04:35:37 main postfix/smtps/smtpd[35405]: Anonymous TLS connection established from unknown[212.70.149.71]: TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)
Mar 25 04:36:05 main postfix/smtps/smtpd[35405]: warning: unknown[212.70.149.71]: SASL LOGIN authentication failed: UGFzc3dvcmQ6
Mar 25 04:36:10 main postfix/smtps/smtpd[35405]: lost connection after AUTH from unknown[212.70.149.71]
Mar 25 04:36:10 main postfix/smtps/smtpd[35405]: disconnect from unknown[212.70.149.71] ehlo=1 auth=0/1 rset=1 commands=2/3
Mar 25 04:37:20 main postfix/smtps/smtpd[35405]: connect from unknown[212.70.149.71]
Mar 25 04:37:30 main postfix/smtps/smtpd[35405]: Anonymous TLS connection established from unknown[212.70.149.71]: TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)
Mar 25 04:37:58 main postfix/smtps/smtpd[35405]: warning: unknown[212.70.149.71]: SASL LOGIN authentication failed: UGFzc3dvcmQ6
Which is repeated many, many times. No e-mails are sent from that IP, though.
I tried blocking that IP in the firewall but that didn't seem to work :? I'd like to know why, though, so if anyone knows, please share!
Information about it may be on one of those pages; I'm not sure because it's been a while, and I don't have the time right now to check.
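(For reference, on a Windows VPS a block rule for that IP would typically look like the line below -- illustrative only, and of course it only helps against that single source:
netsh advfirewall firewall add rule name="Block 212.70.149.71" dir=in action=block remoteip=212.70.149.71
)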
You can use third-party programs:
RdpGuard detects and blocks invalid connection attempts (RDP, SMTP, POP, ...) using the Windows firewall
gykkSPAM (an antispam filter) filters incoming and outgoing emails using local post offices and authentication types

Apache Kafka NoReplicaOnlineException

Using Apache Kafka with a single node (1 Zookeeper, 1 Broker) I get this exception (repeated multiple times):
kafka.common.NoReplicaOnlineException: No replica in ISR for partition __consumer_offsets-2 is alive. Live brokers are: [Set()], ISR brokers are: [0]
What does it mean? Note, I am starting the KafkaServer programmatically, and I am able to send and consume from a topic using the CLI tools.
It seems I should tell this node that it is operating in standalone mode - how should I do this?
This seems to happen during startup.
Full exception:
17-11-07 19:43:44 NP-3255AJ193091.home ERROR [state.change.logger:107] - [Controller id=0 epoch=54] Initiated state change for partition __consumer_offsets-16 from OfflinePartition to OnlinePartition failed
kafka.utils.ShutdownableThread.run  ShutdownableThread.scala: 64
kafka.controller.ControllerEventManager$ControllerEventThread.doWork  ControllerEventManager.scala: 52
kafka.metrics.KafkaTimer.time  KafkaTimer.scala: 31
kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply  ControllerEventManager.scala: 53 (repeats 2 times)
kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply$mcV$sp  ControllerEventManager.scala: 53
kafka.controller.KafkaController$Startup$.process  KafkaController.scala: 1581
kafka.controller.KafkaController.elect  KafkaController.scala: 1681
kafka.controller.KafkaController.onControllerFailover  KafkaController.scala: 298
kafka.controller.PartitionStateMachine.startup  PartitionStateMachine.scala: 58
kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange  PartitionStateMachine.scala: 81
scala.collection.TraversableLike$WithFilter.foreach  TraversableLike.scala: 732
scala.collection.mutable.HashMap.foreach  HashMap.scala: 130
scala.collection.mutable.HashMap.foreachEntry  HashMap.scala: 40
scala.collection.mutable.HashTable$class.foreachEntry  HashTable.scala: 236
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply  HashMap.scala: 130 (repeats 2 times)
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply  TraversableLike.scala: 733
kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply  PartitionStateMachine.scala: 81
kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply  PartitionStateMachine.scala: 84
kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange  PartitionStateMachine.scala: 163
kafka.controller.PartitionStateMachine.electLeaderForPartition  PartitionStateMachine.scala: 303
kafka.controller.OfflinePartitionLeaderSelector.selectLeader  PartitionLeaderSelector.scala: 65
kafka.common.NoReplicaOnlineException: No replica in ISR for partition __consumer_offsets-16 is alive. Live brokers are: [Set()], ISR brokers are: [0]

ColdFusion email settings with TLS

I am trying to configure ColdFusion to send emails using 1&1's servers (smtp.1and1.com) and even though I have set the username and password it keeps failing.
This is what I've done so far:
Set outgoing server to smtp.1and1.com
set username and password
set port to 587
selected Use TLS checkbox
selected Verify Settings box
when I click Save I get the message "Connection Verification Failed!"
In the ColdFusion log files in the mail.log I see this error:
"Error","scheduler-1","03/22/16","19:39:21",,"Can't send command to
SMTP host"
I ran Wireshark and captured some packets, and it seems it does connect to the server, some communication goes back and forth, and then it aborts.
Below is a sample of the capture:
No Time Protocol Length Info
1 0.000000 TCP 66 49858 → 587 [SYN] Seq=0 Win=8192 Len=0 MSS=1460 WS=256 SACK_PERM=1
2 0.000567 TCP 66 587 → 49858 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 SACK_PERM=1 WS=512
3 0.000611 TCP 54 49858 → 587 [ACK] Seq=1 Ack=1 Win=131328 Len=0
4 0.007028 SMTP 112 S: 220 perfora.net (mreueus002) Nemesis ESMTP Service ready
5 0.015100 SMTP 70 C: EHLO vm229CAC8
6 0.015556 TCP 60 587 → 49858 [ACK] Seq=59 Ack=17 Win=29696 Len=0
7 0.015697 SMTP 159 S: 250 perfora.net Hello vm229CAC8 [**.**.**.**] | 250 SIZE 69920427 | 250 AUTH LOGIN PLAIN | 250 STARTTLS
8 0.019485 SMTP 64 C: STARTTLS
9 0.021416 SMTP 62 S: 220 OK
10 0.058490 TLSv1 132 Client Hello
11 0.059244 TLSv1 1514 Server Hello
12 0.059246 TCP 1514 [TCP segment of a reassembled PDU]
13 0.059283 TCP 54 49858 → 587 [ACK] Seq=105 Ack=3092 Win=131328 Len=0
14 0.059308 TLSv1 710 Certificate
15 0.070314 TLSv1 61 Alert (Level: Fatal, Description: Certificate Unknown)
16 0.070368 TCP 54 49858 → 587 [FIN, ACK] Seq=112 Ack=3748 Win=130560 Len=0
17 0.070858 TLSv1 61 Alert (Level: Fatal, Description: Internal Error)
18 0.070905 TCP 54 49858 → 587 [RST, ACK] Seq=113 Ack=3755 Win=0 Len=0
19 0.071198 TCP 60 587 → 49858 [FIN, ACK] Seq=3755 Ack=113 Win=29696 Len=0
All of which makes me think that there is something wrong with the certificate (since it aborts before it even gets to the username and password).
I've saved the 3 certificates from packet 14 and looked at them and they all seem fine - validity is OK, Thawte is the root CA - checked and confirmed the included one is OK, etc.
What am I missing? And are there any other log files that might shed some more light on this issue?
Thanks
I found it. It was the certificate.
ColdFusion runs on top of Java, and Java has its own set of trusted root certificates. This server's root certificate wasn't there (which is why it wasn't trusted).
The solution essentially boiled down to:
Save the root certificate to a file
Import it into the trusted root certificates (cacerts) of ColdFusion's Java runtime
Restart ColdFusion so that it picks up the changes
The first step was easy - I expanded packet 14 in Wireshark, found 3 certificates in it, and saved them as 1.cer, 2.cer and 3.cer (it was 3.cer that contained just the root one). I guess I could have visited any of 1&1's web pages via HTTPS and grabbed it, but I wasn't sure whether they'd use the same root CA, so extracting it from the actual packet seemed like the safer option.
ColdFusion was installed in C:\ColdFusion\, and to find out which Java runtime it starts I looked under C:\ColdFusion\bin\cfstart.bin, which was referring to ..\runtime\bin\jrun -start coldfusion.
Its Java runtime had the certificates stored in C:\ColdFusion\runtime\jre\lib\security\cacerts.
What remained was to import it into that keystore - I used Portecle, as suggested here.
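(For reference, the command-line equivalent of that import is roughly the following; the alias is arbitrary and changeit is the default cacerts password, assuming it hasn't been changed:
keytool -importcert -alias 1and1-root -file 3.cer -keystore C:\ColdFusion\runtime\jre\lib\security\cacerts -storepass changeit
)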
After restarting ColdFusion and asking it politely to verify the settings, it confirmed them and I saw the capture below in Wireshark:
No. Time Protocol Length Info
104 3.895581 TCP 66 55157 → 587 [SYN] Seq=0 Win=8192 Len=0 MSS=1460 WS=256 SACK_PERM=1
105 3.896180 TCP 66 587 → 55157 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 SACK_PERM=1 WS=512
106 3.896229 TCP 54 55157 → 587 [ACK] Seq=1 Ack=1 Win=131328 Len=0
107 3.902608 SMTP 112 S: 220 perfora.net (mreueus003) Nemesis ESMTP Service ready
108 3.903791 SMTP 70 C: EHLO vm229CAC8
109 3.904271 TCP 60 587 → 55157 [ACK] Seq=59 Ack=17 Win=29696 Len=0
110 3.904390 SMTP 159 S: 250 perfora.net Hello vm229CAC8 [**.**.**.**] | 250 SIZE 69920427 | 250 AUTH LOGIN PLAIN | 250 STARTTLS
111 3.904532 SMTP 64 C: STARTTLS
112 3.906347 SMTP 62 S: 220 OK
118 4.112009 TCP 62 [TCP Retransmission] 587 → 55157 [PSH, ACK] Seq=164 Ack=27 Win=29696 Len=8
119 4.112057 TCP 66 55157 → 587 [ACK] Seq=27 Ack=172 Win=131072 Len=0 SLE=164 SRE=172
120 4.115457 TLSv1 132 Client Hello
121 4.116154 TLSv1 1514 Server Hello
122 4.116157 TCP 1514 [TCP segment of a reassembled PDU]
123 4.116158 TLSv1 710 Certificate
124 4.116201 TCP 54 55157 → 587 [ACK] Seq=105 Ack=3748 Win=131328 Len=0
125 4.156467 TLSv1 321 Client Key Exchange
127 4.196201 TCP 60 587 → 55157 [ACK] Seq=3748 Ack=372 Win=30720 Len=0
128 4.196237 TLSv1 97 Change Cipher Spec, Encrypted Handshake Message
129 4.196799 TCP 60 587 → 55157 [ACK] Seq=3748 Ack=415 Win=30720 Len=0
130 4.197005 TLSv1 97 Change Cipher Spec, Encrypted Handshake Message
131 4.197742 TLSv1 91 Application Data
132 4.198262 TLSv1 166 Application Data
133 4.198550 TLSv1 87 Application Data
134 4.199201 TLSv1 93 Application Data
135 4.199677 TLSv1 117 Application Data
136 4.200122 TLSv1 93 Application Data
137 4.200345 TLSv1 101 Application Data
138 4.240137 TCP 60 587 → 55157 [ACK] Seq=3981 Ack=595 Win=30720 Len=0
143 4.448738 TLSv1 105 Application Data
154 4.652126 TCP 105 [TCP Retransmission] 587 → 55157 [PSH, ACK] Seq=3981 Ack=595 Win=30720 Len=51
155 4.652153 TCP 66 55157 → 587 [ACK] Seq=595 Ack=4032 Win=131072 Len=0 SLE=3981 SRE=4032
I also tried sending a few test emails, and everything worked as expected.
Thanks for everyone's help and suggestions! :)
P.S. I also found a backup option. It turns out 1&1 does support TLS but does not require it: plain old SMTP with no TLS worked just fine on port 587.
I discovered this accidentally - it is probably a bug in ColdFusion (version 9 in my case). In ColdFusion's Server Settings > Mail > Undelivered Mail I told it to resend a failed email. And it did - but without attempting the TLS part.