I'm trying to create a macro that can display different levels of detail in a resume, the idea being to be able to state a specific theme and get details only for the relevant entries.
I'm using the "memoir" class in order to use its \newcomment command. I tried moderncv, but I was not really convinced.
Here is what I've come up with so far:
\newcomment{Item}
\newcomment{Descr}
\newcomment{Details}
\newcommand{\cvitem}[3]{
\begin{Item}\textbf{#1}\end{Item}
\begin{Descr}\hspace{1cm} {#2}\end{Descr}
\begin{Details}\\ {\small #3}\end{Details}\vspace{2em}
}
\commentsoff{Item}
\commentsoff{Descr}
\commentsoff{Details}
It works as is, but if I state
\commentson{Details}
then I get this error:
! File ended while scanning use of \next.
<inserted text>
\par
<*> cv_master.tex
I suspect you have forgotten a `}', causing me
to read past where you wanted me to stop.
I'll try to recover; but if the error is serious,
you'd better type `E' or `X' now and fix your file.
Any idea why?
You're better off using conditionals in the traditional sense, that is, \if-like switches that you can toggle on and off. (As for the error: when a comment environment is switched on, memoir skips its body by scanning line by line for the literal \end{Details}; inside \newcommand the body has already been absorbed as part of the macro, so the scanner never finds the end marker and runs off the end of the file, hence "File ended while scanning use of \next".) For example:
\documentclass{article}
\newif\ifItem
\newif\ifDescr
\newif\ifDetails
\newcommand{\cvitem}[3]{%
\ifItem
\textbf{#1}
\fi
\ifDescr
\hspace{1cm} #2
\fi
\ifDetails
\\ {\small #3}
\fi
\vspace{2em}
}
\Itemtrue
\Descrtrue
\Detailsfalse
\begin{document}
\cvitem{First}{Second}{Third}
\end{document}
Why? It's easier and works in all environments/classes.
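If you want the theme-driven selection from the question, one possible sketch (my own addition; it assumes the etoolbox package and made-up theme names) is to wrap \cvitem in a command that flips the Details flag per entry:
% in the preamble: \usepackage{etoolbox} for \ifdefstring
\newcommand{\theme}{software}% the active theme; switch to e.g. "research"
\newcommand{\cvthemeditem}[4]{% #1 = the theme this entry is relevant to
  \ifdefstring{\theme}{#1}{\Detailstrue}{\Detailsfalse}%
  \cvitem{#2}{#3}{#4}%
}
% usage: \cvthemeditem{research}{First}{Second}{Third}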
I have an XSLT template that transforms this:
<WGS__LAT>N20340000</WGS__LAT>
to this:
<latitude>
<deg>20</deg>
<min>34</min>
<sec>00</sec>
<hSec>00</hSec>
<northSouth>North</northSouth>
</latitude>
I wrote an XSpec scenario to test the XSLT template:
<x:scenario label="location/latitude: Check that the XSLT assigns latitude the split-up value of WGS__LAT">
<x:context>
<WGS__LAT>N20340000</WGS__LAT>
</x:context>
<x:expect label="Expect: deg=20, min=34, sec=00, hSec=00, northSouth=North">
<latitude>
<deg>20</deg>
<min>34</min>
<sec>00</sec>
<hSec>00</hSec>
<northSouth>North</northSouth>
</latitude>
</x:expect>
</x:scenario>
I am certain that my XSLT template works correctly, so why does the XSpec tool report FAILED? I thought it might have something to do with whitespace in <x:expect>, so I removed all the whitespace:
<x:expect label="Expect: deg=20, min=34, sec=00, hSec=00, northSouth=North">
<latitude><deg>20</deg><min>34</min><sec>00</sec><hSec>00</hSec><northSouth>North</northSouth></latitude>
</x:expect>
Unfortunately, I still get FAILED. What am I doing wrong?
Without having the full report, and especially what you actually get, it's difficult to answer. But:
In that case, a good starting point is to wrap your context in another tag and to specify in the XSpec which part of the result should be matched. Something like:
<x:context>
<foo>
<WGS__LAT>N20340000</WGS__LAT>
</foo>
</x:context>
<x:expect label="..." test="/foo/*" as="element(latitude)">
<latitude>
<deg>20</deg>
<min>34</min>
<sec>00</sec>
<hSec>00</hSec>
<northSouth>North</northSouth>
</latitude>
</x:expect>
You may have a look at the XSpec wiki for a more precise use of expectations.
The actual result and the expected result are then compared with the fn:deep-equal() function, so there is no output processing on either the result or the expectation. If you use an @href on x:context or on x:expect, then there is some space normalization, but I'm not perfectly sure of that.
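For instance (my own illustration, not taken from the XSpec sources), whitespace-only text nodes between elements are already enough to make that comparison fail:
(: XPath 3.0 sketch: parse-xml() preserves whitespace, so the second
   element has two extra whitespace-only text-node children :)
deep-equal(
  parse-xml('<latitude><deg>20</deg></latitude>')/*,
  parse-xml('<latitude> <deg>20</deg> </latitude>')/*
)
(: returns false :)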
You may also have a look at this issue on indentation problems; it's quite a long and still-open thread.
I have been trying to use the automatic replace function to place entries that start in italics on their own lines, so that, for example,
lähteä60*F【动】
1. 离开, 出发, 走掉líkāi, chūfā, zǒu diào(poistua). Vieraat lähtevät.客人走了kèrén zǒule.Juna lähtee raiteelta kaksi. 火车两点离站huǒchē liǎng diǎn lí zhàn.
would turn into
1. 离开, 出发, 走掉líkāi, chūfā, zǒu diào(poistua).
Vieraat lähtevät.客人走了kèrén zǒule.
Juna lähtee raiteelta kaksi. 火车两点离站huǒchē liǎng diǎn lí zhàn.
However, for some reason unknown to me, the replace function (cf. screenshot) breaks up the lines like this:
lähteä60*F【动】
1. 离开, 出发, 走掉líkāi, chūfā, zǒu diào(poistua).
Vieraat lähtev
ä
t.客人走了kèrén zǒule.
Juna lähtee raiteelta kaksi. 火车两点离站huǒchē liǎng diǎn lí zhàn.
(so the first line "Vieraat lähtevät.客人走了kèrén zǒule." is broken up.)
As far as I can tell, it should all be italics, so I have no idea why it breaks up, and no way to examine what the problem is. Trying to save to different formats doesn't seem to help either. There are thousands of pages of this stuff, so the automatic function is really required.
A small sample file of the stuff can be downloaded from here:
http://shakki.info/test.docx.
Some screenshots of the problem:
I'm quite new to ELK and Grok-filtering, and I'm struggling with parsing this particular pattern in my grok filter.
I've used the grok debugger to try and solve this, but although I like the tool, I just get confused by the custom patterns.
Eventually, I hope to parse lots of log files sent by filebeat to logstash, then send the parsed logs to elasticsearch and display with kibana or some similar visualization tool.
The lines that I need to parse follow this pattern:
1310 2017-01-01 16:48:54 [325:51] [326:49] [359:57] Some log info text
The first four digits are a log type identifier and will be used for grouping. I've called this field "LogLineID".
The date is formatted YYYY-MM-DD HH:MM:SS, and is parsed ok. I called the field "LogDate".
But now the problem begins. Within the square brackets, I have counters, formatted as MM:SS if you like. I cannot for the life of me find a way to sort these out, but I need to compare these times, hence I want to store them as minutes and seconds, not just numbers.
The first is a counter "TimeSpent",
the second is a counter "TimeStarted" and
the third is a counter "TimeSinceDown".
Then, last, comes the info text, which I've managed to grok with simply applying %{GREEDYDATA:LogInfo}.
I notice that the number of minutes can be far higher than the standard 60 minutes within an hour, so I may be barking up the wrong tree here trying to parse it with date patterns such as TIMESTAMP_ISO8601, but then, I don't really know how else to do this.
So, I came this far:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate}
and was, as mentioned, able to parse the log info text (after cutting away the square-bracket parts) with
%{GREEDYDATA:LogInfo}
to create a field LogInfo.
But that's where I'm stuck. Could someone please help me figure out the rest?
Massive thanks in advance.
PS! I also found %{NUMBER:duration}, but as far as I could tell it can only parse values with a dot, not a colon.
A grok regex expression can help you solve the problem.
But first I want to make sure: do you mean that [325:51] [326:49] [359:57] are the three components you want to fetch, and that the result should look like this:
TimeSpent: 325:51
TimeStarted: 326:49
TimeSinceDown: 359:57
If I've got that right, you can use one of the following suggestions:
define your own custom pattern file and add the pattern to it, or
just use the expression directly in the filter part of your Logstash conf file (see the sketch below).
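For example (a sketch only; the MINSEC pattern name and the patterns directory are hypothetical, while the field names are taken from the question):
# ./patterns/counters -- custom pattern file
MINSEC %{NUMBER}:%{NUMBER}
# logstash .conf -- filter section
filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "message" => "%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{MINSEC:TimeSpent}\] \[%{MINSEC:TimeStarted}\] \[%{MINSEC:TimeSinceDown}\] %{GREEDYDATA:LogInfo}" }
  }
}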
Hope it helps you.
Ah, there was a space... Actually, I was misleading myself and everybody in my question, as it was not actually that log line that was causing problems. I just took the first one, not realizing where the problem really was; the line causing problems had a space within the brackets, like this: [ 42:31]. There are also some parts with two spaces, so the way I managed to solve this was to include a %{SPACE} between the \[ and the %{NUMBER}:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{SPACE}%{NUMBER:TimeSpentMinutes}\:%{NUMBER:TimeSpentSeconds}\] \[%{SPACE}%{NUMBER:TimeStartedMinutes}\:%{NUMBER:TimeStartedSeconds}\] \[%{SPACE}%{NUMBER:TimeSinceDownMinutes}\:%{NUMBER:TimeSinceDownSeconds}\] %{GREEDYDATA:LogText}
I still haven't solved the merging of minutes and seconds, but that I can also handle at a later stage.
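For that later stage, one option (just a sketch, assuming a Logstash version with the event get/set Ruby API; the target field name is made up) would be a ruby filter that folds the two captures into a single value in seconds:
filter {
  ruby {
    # TimeSpentTotalSeconds = minutes * 60 + seconds
    code => "event.set('TimeSpentTotalSeconds', event.get('TimeSpentMinutes').to_i * 60 + event.get('TimeSpentSeconds').to_i)"
  }
}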
Thanks to Lin Don for showing an interest in my problem, and sorry for not replying sooner.
Hope the solution will help others (or even myself) if they're stuck on the same kind of problem.
Note to myself: Read the logs more carefully before grok'ing.. :)
I'm toying with some introspection in Swift, and it seems like if you want to get the class of an object in a printable form, these are the best options (introduced in beta 6.0):
_stdlib_getTypeName(someClass)
_stdlib_getDemangledTypeName(someClass) // A slightly cleaner version
I was hoping to find other introspection methods, but unfortunately, command-clicking the methods takes me to the Swift header and they're not declared there.
My other option would be to type _stdlib and wait for autocomplete, or press Control-Space, to see my options. Unfortunately, none of these methods autocomplete.
Is there a file where these and other stdlib functions are declared, or is there documentation for these methods anywhere?
I found the answer to my question via a tips-and-tricks blog post from Realm here -- notably, the post by JP Simard.
The best way to see other methods along these lines is to go to your terminal and type:
cd `xcode-select -p`/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift/macosx
And then enter the following:
nm -a libswiftCore.dylib | grep "T _swift_stdlib"
This will give you a readout of all available functions that looks something like this:
00000000001a43c0 T _swift_stdlib_NSObject_isEqual
00000000001a4490 T _swift_stdlib_NSStringHasPrefixNFD
00000000001a44f0 T _swift_stdlib_NSStringHasSuffixNFD
00000000001a4450 T _swift_stdlib_NSStringNFDHashValue
00000000001a2650 T _swift_stdlib_atomicCompareExchangeStrongPtr
00000000001a2670 T _swift_stdlib_atomicCompareExchangeStrongUInt32
00000000001a2690 T _swift_stdlib_atomicCompareExchangeStrongUInt64
00000000001a2700 T _swift_stdlib_atomicFetchAddUInt32
00000000001a2710 T _swift_stdlib_atomicFetchAddUInt64
00000000001a26f0 T _swift_stdlib_atomicLoadPtr
00000000001a26d0 T _swift_stdlib_atomicLoadUInt32
00000000001a26e0 T _swift_stdlib_atomicLoadUInt64
00000000001a26b0 T _swift_stdlib_atomicStoreUInt32
00000000001a26c0 T _swift_stdlib_atomicStoreUInt64
00000000001a4410 T _swift_stdlib_compareNSStringDeterministicUnicodeCollation
000000000017c560 T _swift_stdlib_conformsToProtocol
00000000001a5a80 T _swift_stdlib_demangleName
000000000017c8e0 T _swift_stdlib_dynamicCastToExistential1
000000000017c6f0 T _swift_stdlib_dynamicCastToExistential1Unconditional
00000000001a5910 T _swift_stdlib_getTypeName
I haven't found any documentation, but a lot of these function names are fairly self-explanatory, and one can always discover a lot by trying them out!
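For example, a quick check in a Swift 1.x playground (the class name here is made up, and these underscore-prefixed functions are undocumented and may not exist in later Swift versions) could look like this:
class Banana {}
let banana = Banana()

let mangled  = _stdlib_getTypeName(banana)           // mangled name; exact form depends on module and toolchain
let readable = _stdlib_getDemangledTypeName(banana)  // human-readable name, e.g. "<module>.Banana"
println(mangled)
println(readable)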
All the answers are good, but the result of that second step is not something we can use directly. We don't even know whether a given function is usable or correct...
I was stuck on these results for about a day.
Finally I dumped all the functions and symbols for the stdlib from libswiftcore.dylib, and I found this.
command:
nm libswiftcore.dylib | grep "_stdlib_"
We can find this line in the result:
00000000000b2ca0 T __TFSs19_stdlib_getTypeNameU__FQ_SS
Remove the first underscore "_" and we get this:
_TFSs19_stdlib_getTypeNameU__FQ_SS
Maybe we could study this website to understand the meaning of "_TFSs19_stdlib_getTypeNameU__FQ_SS",
but I think we can get the correct function description faster!
So we demangle it like this in the Xcode lldb window:
(lldb) p _stdlib_demangleName("_TFSs19_stdlib_getTypeNameU__FQ_SS")
(String) $R0 = "Swift._stdlib_getTypeName <A>(A) -> Swift.String"
Finally, we can expose more undocumented functions in Swift that we have never seen before. Let's try another one we have never heard of, like this:
(lldb) p _stdlib_demangleName("_TFSs24_stdlib_atomicLoadARCRefFT6objectGVSs20UnsafeMutablePointerGSqPSs9AnyObject____GSqPS0___")
(String) $R1 = "Swift._stdlib_atomicLoadARCRef (object : Swift.UnsafeMutablePointer<Swift.Optional<Swift.AnyObject>>) -> Swift.Optional<Swift.AnyObject>"
All clear! Thank God!
Sharing this with you; I hope it helps.
:D
I have a makefile with the following format. First I define what my outputs are:
EXEFILES = myexe1.exe myexe2.exe
Then I define the dependencies for those outputs:
myexe1.exe : myobj1.obj
myexe2.exe : myobj2.obj
Then I have some macros that define extra dependencies for linking:
DEP_myexe1 = lib1.lib lib2.lib
DEP_myexe2 = lib3.lib lib4.lib
Then I have the target for transforming .obj into .exe:
$(EXEFILES):
$(LINK) -OUT:"Exe\$#" -ADDOBJ:"Obj\$<" -IMPLIB:$($($(DEP_$*)):%=Lib\\%)
What I want to happen is (example for myexe1.exe)
DEP_$* -> DEP_myexe1
$(DEP_myexe1) -> lib1.lib lib2.lib
$(lib1.lib lib2.lib:%=Lib\\%) -> Lib\lib1.lib Lib\lib2.lib
Unfortunately this is not working. When I run make --just-print, the -IMPLIB: arguments are empty. However, if I run $(warning DEP_$*) I get
DEP_myexe1
And when I run $(warning $(DEP_myexe1)) I get
lib1.lib lib2.lib
So for some reason, make does not like the combination of $(DEP_$*). Perhaps it cannot resolve macro names dynamically like this. What can I do to get this to work? Is there an alternative?
Where does $(warning DEP_$*) give you DEP_myexe1 as output exactly? Because given your makefile above it shouldn't.
$* is the stem of the target pattern that matched. In your case, because you have explicit target names, you have no pattern match and so no stem, so $* is always empty.
Additionally, you are attempting a few too many expansions. You are expanding $* to get myexe1 directly (assuming for the moment that variable works the way you intended). You then prefix that with DEP_ and used $(DEP_$*) to get the lib1.lib lib2.lib. You then expand that result $($(DEP_$*)) and then expand that (empty) result again (to do your substitution) $($($(DEP_$*)):%=Lib\\%).
You want to either use $(@:.exe=) instead of $* in your rule body or use %.exe as your target and then use $* to get myexe1/myexe2.
You then want to drop two levels of expansion from $($($(DEP_$*)):%=Lib\\%) and use $(DEP_$*:%=Lib\\%) instead.
So (assuming you use the pattern rule) you end up with:
%.exe:
$(LINK) -OUT:"Exe\$#" -ADDOBJ:"Obj\$<" -IMPLIB:$(DEP_$*:%=Lib\\%)
I managed to get it working without needing to resolve macro names in the way described above. I modified the linking dependencies like this:
myexe1.exe : myobj1.obj lib1.lib lib2.lib
myexe2.exe : myobj2.obj lib3.lib lib4.lib
Then I need to filter these files by extension in the target recipe:
$(EXEFILES):
$(LINK) -OUT:"$(EXE_PATH)\$#" -ADDOBJ:$(patsubst %, Obj\\%, $(filter %.obj, $^)) -IMPLIB:$(patsubst %, Lib\\%, $(filter %.lib, $^))
The $(patsubst ...) is used to prepend the directory that the relevant files are in.
In the case of myexe1.exe, the link command expands to:
slink -OUT:"Exe\myexe1.exe" -ADDOBJ: Obj\myexe1.obj -IMPLIB: Lib\lib1.lib Lib\lib2.lib
Out of interest, I would still like to know whether it is possible to resolve macro names as in the question.