citrus waitFor().file fails to read a file - citrus-framework

I'm trying to use waitFor() in my Citrus test to wait for an output file on disk to be written by the process I'm testing. I've used this code:
File outputFile = new File("/esbfiles/blesbt/bl03orders.99160221.14289.xml");
waitFor().file(outputFile).seconds(65L).interval(1000L);
After a few seconds, the file appears in the folder as expected. The user I'm running the test code as has permission to read the file. The waitFor(), however, ends in a timeout:
09:46:44 09:46:44,818 DEBUG dition.FileCondition| Checking file path '/esbfiles/blesbt/bl03orders.99160221.14289.xml'
09:46:44 09:46:44,818 WARN dition.FileCondition| Failed to access file resource 'class path resource [esbfiles/blesbt/bl03orders.99160221.14289.xml] cannot be resolved to URL because it does not exist'
What could be the problem? Can’t I check for files outside the classpath?

This is actually a bug in Citrus. Citrus works with the file path instead of the file object, and in combination with Spring's PathMatchingResourcePatternResolver this causes Citrus to search for a classpath resource instead of using the absolute file path as an external file system resource.
You can work around this by providing the absolute file path instead of the file object, like this:
waitFor().file("file:/esbfiles/blesbt/bl03orders.99160221.14289.xml")
.seconds(65L)
.interval(1000L);
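If you'd rather keep working with the File object, a variant that builds the same URL from it should also work (an untested sketch; it relies on the file: prefix above to force a file system resource):
File outputFile = new File("/esbfiles/blesbt/bl03orders.99160221.14289.xml");
waitFor().file("file:" + outputFile.getAbsolutePath())
    .seconds(65L)
    .interval(1000L);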
An issue regarding the broken file object conversion has been opened: https://github.com/christophd/citrus/issues/303
Thanks for pointing to it!

Related

Talend - how to configure tFileInputDelimited do not throw error when file not found

Good day,
I am using tFileInputDelimited in Talend Data Studio to read a txt file and get some value inside.
The input file name is something like the following; it contains the date in the file name:
checksum_150123.txt
This file is only created in the last few steps before the job ends, so it does not exist when the job starts.
Thus, when the job first runs each day, the file does not exist yet, and tFileInputDelimited throws a file-not-found error:
C:\LandingZone\jx\checksum_180123.txt (The system cannot find the file specified)
[ERROR] 14:13:35 my_track.my_precheck_registration_0_1.DL_PRECHECK_REGISTRATION- CollectCheckSum_1_tFileInputDelimited_1 - C:\LandingZone\jx\checksum_180123.txt (The system cannot find the file specified)
I have a requirement not to show this error. How can I configure this?
For that I recommend using the tFileExist component and then using its Exist variable (for example ((Boolean)globalMap.get("tFileExist_1_EXISTS"))) in a "Run if" trigger.
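As a minimal sketch, assuming your check component is named tFileExist_1, the condition on the "Run if" trigger leading into the tFileInputDelimited subjob would simply be:
// "Run if" condition: only continue when the checksum file exists
((Boolean)globalMap.get("tFileExist_1_EXISTS"))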
Hope this answers your question

SolrCloud Configset API upload returns 500 "KeeperErrorCode = NoNode"

Situation
First of all I must mention that I'm using Solr 8.1.1 and am running the default "solr -e cloud" to do some testing. This is running on a Windows Azure VM. I'm trying to create a PowerShell script that will do some setup on the SolrCloud. The first step in this is uploading a custom Configset. I was using https://lucene.apache.org/solr/guide/8_1/configsets-api.html as a guide, and the PowerShell command, if you take away all the parameters and such, boils down to the following:
Invoke-WebRequest -Uri "http://localhost:8983/solr/admin/configs?action=UPLOAD&name=MyConfig" -Method Post -ContentType "application/octet-stream" -InFile "config.zip"
EDIT: For clarity the contents of the ZIP is as follows: https://imgur.com/a/OHR1bf1
Problem
When I run the above command, however, I'm met with the following error:
Invoke-WebRequest : { "responseHeader":{ "status":500, "QTime":11}, "error":{ "msg":"KeeperErrorCode = NoNode for /configs/MyConfig/lang/contractions_ca.txt", "trace":"org.apache.zookeeper.KeeperException$NoNodeException:
KeeperErrorCode = NoNode for /configs/MyConfig/lang/contractions_ca.txt\r\n\tat org.apache.zookeeper.KeeperException.create(KeeperException.java:114)\r\n\tat
org.apache.zookeeper.KeeperException.create(KeeperException.java:54)\r\n\tat org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:792)\r\n\tat
org.apache.solr.common.cloud.SolrZkClient.lambda$create$7(SolrZkClient.java:415)\r\n\tat org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:71)\r\n\tat
org.apache.solr.common.cloud.SolrZkClient.create(SolrZkClient.java:415)\r\n\tat org.apache.solr.handler.admin.ConfigSetsHandler.createZkNodeIfNotExistsAndSetData(ConfigSetsHandler.java:201)\r\n\tat
org.apache.solr.handler.admin.ConfigSetsHandler.handleConfigUploadRequest(ConfigSetsHandler.java:181)\r\n\tat org.apache.solr.handler.admin.ConfigSetsHandler.handleRequestBody(ConfigSetsHandler.java:111)\r\n\tat
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\r\n\tat org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:796)\r\n\tat
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:762)\r\n\tat org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:522)\r\n\tat
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:397)\r\n\tat org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:343)\r\n\tat
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)\r\n\tat org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)\r\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\r\n\tat org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\r\n\tat
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\r\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\r\n\tat
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)\r\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\r\n\tat
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)\r\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\r\n\tat
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)\r\n\tat org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)\r\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\r\n\tat org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)\r\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\r\n\tat org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)\r\n\tat
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\r\n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\r\n\tat
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\r\n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\r\n\tat
org.eclipse.jetty.server.Server.handle(Server.java:502)\r\n\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)\r\n\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\r\n\tat
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)\r\n\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)\r\n\tat
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)\r\n\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)\r\n\tat
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)\r\n\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)\r\n\tat
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)\r\n\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)\r\n\tat
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)\r\n\tat org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)\r\n\tat java.lang.Thread.run(Thread.java:748)\r\n",
"code":500}}
At line:1 char:1
Observations
When I first "failed" I had created a zip file from my config which contained an additional top-level folder (i.e. instead of MyConfig/solrconfig.xml etc., my ZIP was MyConfig/MyConfig/solrconfig.xml). When I used this, the upload command ran successfully, but the second command (creating a collection) failed because it could not find solrconfig.xml. This tells me that the ZIP is correctly present in the request and Solr does seem capable of processing it, but once I correct it to an actual configset it massively fails?
EDIT: I was asked about this and whether using "conf" in the zip would work. As I mentioned here, this results in a successful upload (https://imgur.com/a/JHLZ8td); however, as you can see it does not match the other configsets, and when you try to create a collection with this set you will get: Error CREATEing SolrCore 'Test_shard1_replica_n1': Unable to create core [Test_shard1_replica_n1] Caused by: Can't find resource 'solrconfig.xml' in classpath or '/configs/Sitecore', cwd=C:\solr-8.1.1\server
Question(s)
What am I doing wrong? Is this a bug?
Going back through some work I did on SolrCloud a while ago, I am reminded of one annoying issue I hit:
I got odd issues uploading the schema config zip files if I had created that zip using "Send to Compressed Folder" in the Windows UI, or via Compress-Archive in PowerShell. I found that compressing the data with 7Zip did work, however.
I suspect there's something incompatible between the Windows zip code (which I think is quite old, and something they licensed ages ago?) and how Solr/ZooKeeper deals with extracting the files again?
I just ran into the same issue without using Windows zip code. I was trying to upload a configset to Solr 7.7.3 from a conf directory containing a "lang" subdirectory with a bunch of files. I got the NoNode error for /configs/_myconfigsetname_/lang/stopwords_eu.txt. The configset was being zipped on the fly through a recursive directory walk in Java, sending each filename to the Zip file using Java's ZipOutputStream. The resulting zipped bytes were then sent to Solr/Zookeeper.
This code worked fine for conf directories without subdirectories. It turned out that when there is a subdirectory, it was necessary to create a ZipEntry for the directory (e.g. lang/) before adding files to the Zip stream such as lang/stopwords_eu.txt.
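For illustration, here is a minimal sketch of such a walk (ConfigsetZipper and zipConfDir are hypothetical names; the key point is the explicit directory entry, written before the files inside it):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public final class ConfigsetZipper {
    // Zips a conf directory in memory, writing an explicit ZipEntry for each
    // subdirectory (e.g. "lang/") before the files inside it.
    public static byte[] zipConfDir(Path confDir) throws IOException {
        List<Path> paths;
        try (Stream<Path> walk = Files.walk(confDir)) {
            paths = walk.collect(Collectors.toList()); // depth-first: parents precede children
        }
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(bytes)) {
            for (Path path : paths) {
                if (path.equals(confDir)) {
                    continue; // no entry for the conf directory itself
                }
                // Entry names are relative to the conf dir and use forward slashes
                String name = confDir.relativize(path).toString().replace('\\', '/');
                if (Files.isDirectory(path)) {
                    zip.putNextEntry(new ZipEntry(name + "/")); // trailing slash marks a directory entry
                    zip.closeEntry();
                } else {
                    zip.putNextEntry(new ZipEntry(name));
                    Files.copy(path, zip);
                    zip.closeEntry();
                }
            }
        }
        return bytes.toByteArray();
    }
}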

Check for empty file and quit cakebuild

I am attempting to write a check in my Cake build script to pull in a file from BuildParameters and check if the file contents are empty -- if contents are empty, throw an exception and quit the build.
I am attempting to use FileReadText from the FileHelpers namespace but for some reason I cannot get my build to recognize the file command. I am following the syntax and documentation found here: https://cakebuild.net/api/Cake.FileHelpers/FileHelperAliases/97F5679A
Here is the code I am trying in build.cake:
var fileReadText = FileReadText(Parameters.TestParameters.TestListFP);
var fileText = fileReadText.ThrowIfNullOrEmpty(nameof(fileReadText));
The argument Parameters.TestParameters.TestListFP is set in my Parameters.cake file as such:
TestListFP = context.File("C:\Some\Path\some_file_name.txt");
Using the above code, I see this error:
error CS0103: The name 'FileReadText' does not exist in the current context
Note that I do not have an ICakeContext in build.cake, just BuildParameters.
I tried to resolve the issue by adding using Cake.FileHelpers; at the top of my build.cake file, but then I see this error:
The type or namespace name 'FileHelpers' does not exist in the namespace 'Cake' (are you missing an assembly reference?)
The script works fine without my FileReadText code, so I know TestListFP is actually a valid file.
I think I am inherently misunderstanding how to use FileHelpers and FileReadText and I could not find any examples of usage in documentation or anywhere else. If anyone has guidance on how to use this method, or a better way to accomplish what I am trying to do, I would appreciate the help.
Have you added the #addin pre-processor directive, as mentioned here:
https://github.com/cake-contrib/Cake.FileHelpers/#cakefilehelpers
You can easily reference Cake.FileHelpers directly in your build script via a cake addin:
#addin "Cake.FileHelpers"

kdb q: find the C libraries it loads

I have this line of code:
loadedFunc: `:mylib 2:(`myfunc;1)
So, from the kdb/q reference, this means loading myfunc, which takes one argument, from the dynamic library named mylib.
In which path can I locate the physical library file for mylib? I don't see any path specified anywhere.
The default path is set as mentioned here
It will attempt to load from the current working directory (as \pwd ) first.
If it doesn't find the appropriate library there, then it will attempt to load from the $QHOME/[installationType] directory (so C:/q/w32 by default for Windows 32bit, etc.)

Hadoop - Input path does not exist

I set up Hadoop on Ubuntu and followed all the necessary steps:
1. Created the HDFS file system
2. Moved the text files to the input directory
3. Have the privileges to access all the directories.
But when I run the simple word count example, I got:
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/user/root/input
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:224)
But the input path is valid and I can even view the files in that path from Eclipse itself, so please assist with where I am going wrong.
I have attached the screenshot for reference.
Add the following two lines to your code:
config.addResource(new Path("/HADOOP_HOME/conf/core-site.xml"));
config.addResource(new Path("/HADOOP_HOME/conf/hdfs-site.xml"));
Your client is looking into the local FS.
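For context, here is a minimal sketch of where those addResource calls could sit in a word count driver (the class name and output path are placeholders, and the mapper/reducer setup is omitted; adjust the config paths to your installation):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        // Pull in the cluster configuration so input paths resolve against HDFS,
        // not the local file system.
        config.addResource(new Path("/HADOOP_HOME/conf/core-site.xml"));
        config.addResource(new Path("/HADOOP_HOME/conf/hdfs-site.xml"));

        Job job = Job.getInstance(config, "word count");
        job.setJarByClass(WordCountDriver.class);
        // setMapperClass/setReducerClass and output key/value classes go here
        FileInputFormat.addInputPath(job, new Path("/user/root/input"));
        FileOutputFormat.setOutputPath(job, new Path("/user/root/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}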
For hadoop-2.2.0 on Windows 7, I added the lines below and it solved the issue (NOTE: my Hadoop home is C:\MyWork\MyProjects\Hadoop\hadoop-2.2.0):
Configuration conf = new Configuration();
conf.addResource(new Path("C:\\MyWork\\MyProjects\\Hadoop\\hadoop-2.2.0\\etc\\hadoop\\core-site.xml"));
conf.addResource(new Path("C:\\MyWork\\MyProjects\\Hadoop\\hadoop-2.2.0\\etc\\hadoop\\hdfs-site.xml"));