Is it possible to exclude items from Xcode's code completion? - swift

One of the worst offenders is NSAccessibilityProtocol, for example.
The protocol declares lots of members (26 functions)
It's conformed to by some of the most common object types (e.g. NSView, NSWindow).
NSObject even has these methods! (Even though it doesn't conform to NSAccessibilityProtocol explicitly, for some reason)
The function names contain common words like "frame", "style", "range", "size", "button", "menu", etc., so they come up really frequently as false positives (because of fuzzy searching).
Most (all?) of the members are deprecated
Is it possible to filter out these members from code completion? When I do need them, I would rather just look them up in the API docs manually, if it meant that they wouldn't disturb me the other 95% of the time.
Here are some similar questions on the topic, none of which is both up to date and high quality:
How can I exclude certain terms from Xcode's code completion? (auto complete, content assist, suggest); the suggestion was to use AppCode, which is obviously not exactly applicable
Is there a way to improve Xcode's code completion?, old (2014); the suggestion is to use a plugin, but Xcode has since discontinued its support for plugins
Is there a way to disable or modify Xcode's code completion?, applies to an old Xcode version. /Applications/Xcode.app/Contents/PlugIns/TextMacros.xctxtmacro doesn't exist on my machine (Xcode Version 12.4 (12D4e))

Related

Why does String.contains behave differently when I import Foundation?

Just started learning Swift, am really confused about the following behaviour.
This is what I get when I run String.contains without Foundation:
"".contains("") // true
"a".contains("") // true
"a".contains("a") // true
"" == "" // true
And this is what I get with Foundation:
import Foundation
"".contains("") // false
"a".contains("") // false
"a".contains("a") // true
"" == "" // true
Why are the results different depending on whether I import Foundation? Are there other such differences, and is there an exhaustive list somewhere? Didn't find anything in the Foundation documentation, but this seems important to document. I'm only aware of this other example.
Also: How does this happen and is it normal? I understand that Swift has stuff like extensions that change the behaviour of every instance of something once they're included, but surely that should only add behaviour, not change existing behaviour. And if existing behaviour is changed, shouldn't the language indicate this somehow, like make me use a different type if I want the different behaviour?
Basically this is the same as the question I answer here.
Foundation is not part of Swift, it's part of Cocoa, the older Objective-C library that preceded Swift by many, many years. Foundation's version of a string is NSString. But Swift String is "bridged" to NSString, so as soon as you import Foundation, a bunch of NSString methods spring to life as if they were part of Swift String even though they are not. In your case, you actually end up calling a completely different method which, as you've discovered, gives different results.
A good way to see this is to command-click on the term contains in your code (or even better, option-click it and then click Open in Developer Documentation):
If you have not imported Foundation (or UIKit), you jump to Swift String's contains.
If you have imported Foundation, you jump to Foundation's contains.
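If the ambiguity bites you, one workaround is to force the stdlib's Character-based Sequence overload, which the Foundation bridging never shadows. A minimal sketch (the false result for the String-argument call is the behaviour reported in the question, as seen in the asker's toolchain):

```swift
import Foundation

// String-argument overload: with Foundation imported, this resolved to
// the NSString-backed method in the asker's toolchain, hence false.
let bridged = "".contains("")

// Character-argument overload: this is Sequence's element search, which
// has no NSString counterpart, so it behaves the same with or without
// Foundation imported.
let hasB = "abc".contains(Character("b"))   // true
let hasZ = "abc".contains(Character("z"))   // false

print(bridged, hasB, hasZ)
```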
As for this part:
shouldn't the language indicate this somehow
I'm afraid Stack Overflow is not very good on "should" questions. I would say, yes, this is a little maddening, but it's part of the price we pay for the easy and seamless integration of Swift into the world of Cocoa programming. You could argue that the Swift people should not have named their method contains, but that train has left the station, and besides, it's a name that perfectly indicates what the method does.
Another thing to keep in mind is that you would probably never really use Swift except in the presence of Foundation (perhaps because you're in the presence of UIKit or SwiftUI or AppKit) so in practical terms the issue wouldn't arise. You've hit an unusual edge-case, which is commendable but, ex hypothesi, unusual.
To make things even more complicated, I think the Swift library method you encountered may have been just introduced as part of Xcode 14 and Swift 5.7 etc. See https://developer.apple.com/videos/play/wwdc2022/110354/?time=1415 for the WWDC '22 discussion of new String features. In earlier versions of Xcode, the phrase "a".contains("") would not even have compiled in the absence of Foundation — and so the problem would never have arisen!

Switch behavior when new enum values are added

Note: this is no longer relevant. Recent versions of Swift have multiple features that address enum binary compatibility in various ways, such as @unknown default, frozen enums, etc.
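For reference, the modern @unknown default pattern looks like this. This is a sketch with a stand-in enum (ActivityType and its case names are illustrative, since the real HKWorkoutActivityType needs the HealthKit SDK):

```swift
// Stand-in for an NS_ENUM-style type that may grow new cases in a future SDK.
enum ActivityType {
    case americanFootball
    case taiChi
    case running   // imagine this case was added in a newer SDK version
}

func displayName(for activity: ActivityType) -> String {
    switch activity {
    case .americanFootball: return "American Football"
    case .taiChi: return "Tai Chi"
    // Unlike a plain `default`, `@unknown default` keeps the compiler
    // warning about every known case you haven't listed explicitly,
    // while still giving values from future SDKs a safe runtime fallback.
    @unknown default: return "Unknown Activity"
    }
}

print(displayName(for: .running))
```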
Various enums in HealthKit tend to get new values added with each release of iOS. For example, HKWorkoutActivityType has had new values added in each iOS version since its introduction.
Say I am mapping this enum to a string value using a Swift switch statement:
extension HKWorkoutActivityType {
    var displayName: String {
        switch self {
        case .americanFootball: return "American Football"
        // ...Exhaustive switch statement, with a line for every enum case,
        // including values added in iOS 10 and 11...
        case .taiChi: return "Tai Chi"
        }
    }
}
let event: HKWorkoutEvent = ...
print("Activity type is: \(event.type.displayName)")
This switch statement, compiled against the iOS 11 SDK, works fine and is backward compatible with older iOS versions. Note that at the time of compilation, the switch statement is exhaustive, so there is no default case.
But if new HKWorkoutActivityType values are added in iOS 12, and I don't recompile this code, how will the displayName getter behave for new enum values? Should I expect a crash? Undefined behavior? Does it depend on the type of enum (for example, here it's an Objective-C NS_ENUM, but will Swift enums behave differently)? etc.
FWIW, this is partially what this Swift Evolution proposal is addressing.
Hopefully they will decide on a solution that satisfies issues like this nicely too!
Long story short, you may be able to avoid this issue by adding a default case (even though the compiler will yell at you) or by using version tags. However, this problem likely falls under "undefined" currently.
The long story:
The current version of Swift does not have ABI stability, so a compiled Swift application is not guaranteed to (and almost definitely won't) interface with a framework compiled with a newer version (which is the reason the platform frameworks are still Objective-C).
So how this category of changes affects Swift is a work in progress. We will probably have a better definition of how to deal with this type of issue when Swift 5 is released. Until then, adding default and/or version checking is probably the way to go.
Very interesting question, and upvoted. I know of no way to perfectly test this in (a) Xcode 9 and (b) iOS 11. But that may be your answer.
I think the desired solution is if #available(iOS 12, *); where to put it, though, is at issue. Encapsulate the entire switch statement? Just the iOS 12 addition?
The result should be that, between the target iOS version in Xcode and the Swift compiler, it's covered, and it should yield an error (hopefully explaining that iOS 11 is targeted but something is only available in iOS 12) to indicate that you either need to use if #available(iOS 12, *) someplace or change your target.
I know of no easy way to test this though, without rebuilding. Which is integral to your question! Therefore I guess the rule is:
Always rebuild your app when a new iOS (and associated Xcode) version is released.
Consider this part of you taking ownership of your code.

Idiomatic Rust plugin system

I want to outsource some code for a plugin system. Inside my project, I have a trait called Provider which is the code for my plugin system. If you activate the feature "consumer" you can use plugins; if you don't, you are an author of plugins.
I want authors of plugins to get their code into my program by compiling it to a shared library. Is a shared library a good design decision? The plugins are limited to Rust anyway.
Does the plugin host have to go the C way for loading the shared library: loading an unmangled function?
I just want authors to use the trait Provider for implementing their plugins and that's it.
After taking a look at sharedlib and libloading, it seems impossible to load plugins in an idiomatic Rust way.
I'd just like to load trait objects into my ProviderLoader:
// lib.rs
pub struct Sample { ... }

pub trait Provider {
    fn get_sample(&self) -> Sample;
}

pub struct ProviderLoader {
    plugins: Vec<Box<Provider>>
}
When the program is shipped, the file tree would look like:
.
├── fancy_program.exe
└── providers
├── fp_awesomedude.dll
└── fp_niceplugin.dll
Is that possible if plugins are compiled to shared libs? This would also affect the decision of the plugins' crate-type.
Do you have other ideas? Maybe I'm on the wrong path so that shared libs aren't the holy grail.
I first posted this on the Rust forum. A friend advised me to give it a try on Stack Overflow.
UPDATE 3/27/2018:
After using plugins this way for some time, I have to caution that in my experience things do get out of sync, and it can be very frustrating to debug (strange segfaults, weird OS errors). Even in cases where my team independently verified the dependencies were in sync, passing non-primitive structs between the dynamic library binaries tended to fail on OS X for some reason. I'd like to revisit this, find what cases it happens in, and perhaps open an issue with Rust, but I'm going to advise caution with this going forward.
LLDB and valgrind are near-essential to debug these issues.
Intro
I've been investigating things along these lines myself, and I've found there's little official documentation for this, so I decided to play around!
First let me note, as there is little official word on these properties please do not rely on any code here if you're trying to keep planes in the air or nuclear missiles from errantly launching, at least not without doing far more comprehensive testing than I've done. I'm not responsible if the code here deletes your OS and emails an erroneous tearful confession of committing the Zodiac killings to your local police; we're on the fringes of Rust here and things could change from one release or toolchain to another.
I have personally tested this on Rust 1.20 stable in both debug and release configurations on Windows 10 (stable-x86_64-pc-windows-msvc) and CentOS 7 (stable-x86_64-unknown-linux-gnu).
Approach
The approach I took was a shared common crate, listed as a dependency by both crates, defining the common struct and trait definitions. At first, I was also going to test having a struct with the same structure, or a trait with the same definitions, defined independently in both libraries, but I opted against it because it's too fragile and you wouldn't want to do it in a real design. That said, if anybody wants to test this, feel free to do a PR on the repository above and I will update this answer.
In addition, the Rust plugin was declared dylib. I'm not sure how compiling as cdylib would interact, since I think it would mean that upon loading the plugin there are two versions of the Rust standard library hanging around (since I believe cdylib statically links the Rust stdlib into the shared object).
Tests
General Notes
The structs I tested were not declared #[repr(C)]. This could provide an extra layer of safety by guaranteeing a layout, but I was most curious about writing "pure" Rust plugins with as little "treating Rust like C" fiddling as possible. We already know you can use Rust via FFI by wrapping things in opaque pointers, manually dropping, and such, so it's not very enlightening to test this.
The function signature I used was pub fn foo(args) -> output with the #[no_mangle] directive. It turns out that rustfmt automatically changes extern "Rust" fn to simply fn. I'm not sure I agree with this in this case, since they are most certainly "extern" functions here, but I will choose to abide by rustfmt.
Remember that even though this is Rust, this has elements of unsafety, because libloading (or the unstable DynamicLib functionality) will not type-check the symbols for you. At first I thought my Vec test was proving you couldn't pass Vecs between host and plugin, until I realized that on one end I had Vec<i32> and on the other I had Vec<usize>.
Interestingly, there were a few times I pointed an optimized test build at an unoptimized plugin and vice versa and it still worked. However, I still can't in good faith recommend building plugins and host applications with different toolchains, and even if you do, I can't promise that for some reason rustc/llvm won't decide to do certain optimizations on one version of a struct and not another. In addition, I'm not sure if this means that passing types through FFI prevents certain optimizations such as Null Pointer Optimizations from occurring.
You're still limited to calling bare functions, no Foo::bar because of the lack of name mangling. In addition, due to the fact that functions with trait bounds are monomorphized, generic functions and structs are also out. The compiler can't know you're going to call foo<i32> so no foo<i32> is going to be generated. Any functions over the plugin boundary must take only concrete types and return only concrete types.
Similarly, you have to be careful with lifetimes for similar reasons: since there's no static lifetime checking, Rust is forced to believe you when you say a function returns &'a when it's really &'b.
Native Rust
The first tests I performed were on no custom structures; just pure, native Rust types. This would give a baseline for whether this is even possible. I chose three baseline types: &mut i32, &mut Vec, and Option<i32> -> Option<i32>. These were all chosen for very specific reasons: the &mut i32 because it tests a reference, the &mut Vec because it tests growing heap memory allocated in the host application, and the Option as a dual-purpose test of passing by move and matching a simple enum.
All three work as expected. Mutating the reference mutates the value, pushing to a Vec works properly, and the Option works properly whether Some or None.
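The three baseline shapes can be sketched as ordinary functions (illustrative; in the real test they live in the plugin crate and are resolved by symbol name rather than called directly):

```rust
// &mut i32: mutating through a reference that crosses the boundary.
#[no_mangle]
pub fn increment(x: &mut i32) {
    *x += 1;
}

// &mut Vec<i32>: growing a heap allocation owned by the host.
#[no_mangle]
pub fn push_one(v: &mut Vec<i32>) {
    v.push(1);
}

// Option<i32> -> Option<i32>: pass by move and match a simple enum.
#[no_mangle]
pub fn double_opt(o: Option<i32>) -> Option<i32> {
    o.map(|n| n * 2)
}

fn main() {
    let mut x = 0;
    increment(&mut x);
    assert_eq!(x, 1);

    let mut v: Vec<i32> = Vec::new();
    push_one(&mut v);
    assert_eq!(v, vec![1]);

    assert_eq!(double_opt(Some(21)), Some(42));
    assert_eq!(double_opt(None), None);
    println!("ok");
}
```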
Shared Struct Definition
This was meant to test whether you could pass a non-builtin struct with a common definition on both sides between plugin and host. This works as expected, but as mentioned in the "General Notes" section, I can't promise that Rust won't optimize a structure definition on one side and not the other. Always test your specific use case and use CI in case it changes.
Boxed Trait Object
This test uses a struct whose definition is only defined on the plugin side, but implements a trait defined in a common crate, and returns a Box<Trait>. This works as expected. Calling trait_obj.fun() works properly.
At first I actually anticipated there would be issues with dropping without making the trait explicitly have Drop as a bound, but it turns out Drop is properly called as well (this was verified by setting the value of a variable declared on the test stack via raw pointer from the struct's drop function). (Naturally I'm aware drop is always called even with trait objects in Rust, but I wasn't sure if dynamic libraries would complicate it).
NOTE:
I did not test what would happen if you load a plugin, create a trait object, then drop the plugin (which would likely close it). I can only assume this is potentially catastrophic. I recommend keeping the plugin open as long as the trait object persists.
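The plugin-side half of this boxed-trait-object pattern can be sketched like so (names such as create_provider are illustrative, not from the repository above; the main function simulates what the host would do after resolving the symbol with libloading):

```rust
// Shared definitions (in the real setup these live in the common crate).
pub struct Sample {
    pub value: i32,
}

pub trait Provider {
    fn get_sample(&self) -> Sample;
}

// Plugin-private struct: the host never sees its definition, only the trait.
struct MyPlugin;

impl Provider for MyPlugin {
    fn get_sample(&self) -> Sample {
        Sample { value: 42 }
    }
}

// The host resolves this unmangled symbol by name and immediately
// re-boxes the raw pointer into a Box<dyn Provider>.
#[no_mangle]
pub extern "Rust" fn create_provider() -> *mut dyn Provider {
    Box::into_raw(Box::new(MyPlugin) as Box<dyn Provider>)
}

fn main() {
    // Host side, after symbol lookup:
    let raw = create_provider();
    let provider: Box<dyn Provider> = unsafe { Box::from_raw(raw) };
    println!("{}", provider.get_sample().value);
}
```

Re-boxing promptly matters: the Box's destructor then runs in the host as usual, which is what the Drop observation above relies on.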
Remarks
Plugins work exactly as you'd expect from just linking a crate naturally, albeit with some restrictions and pitfalls. As long as you test, I think this is a very natural way to go. It makes symbol loading more bearable, for instance, if you only need to load a new function and then receive a trait object implementing an interface. It also avoids nasty C memory leaks, which happen because you couldn't, or forgot to, load a drop/free function. That said, be careful, and always test!
There is no official plugin system, and you cannot do plugins loaded at runtime in pure Rust. I saw some discussions about doing a native plugin system, but nothing is decided for now, and maybe there will never be any such thing. You can use one of these solutions:
You can extend your code with native dynamic libraries using FFI. To use the C ABI, you have to use repr(C), no_mangle attribute, extern etc. You will find more information by searching Rust FFI on the internets. With this solution, you must use raw pointers: they come with no safety guarantee (i.e. you must use unsafe code).
Of course, you can write your dynamic library in Rust, but to load it and call the functions, you must go through the C ABI. This means that the safety guarantees of Rust do not apply there. Furthermore, you cannot use higher-level Rust features such as traits, enums, etc. between the library and the binary.
If you do not want this complexity, you can use a language adapted to extending Rust, with which you can dynamically add functions to your code and execute them with the same guarantees as in Rust. This is, in my opinion, the easier way to go: if you have the choice, and if execution speed is not critical, use this to avoid tricky C/Rust interfaces.
Here is a (not exhaustive) list of languages that can easily extend Rust:
Gluon, a functional language like Haskell
Dyon, a small but powerful scripting language intended for video games
Lua with rlua or hlua
You can also use Python or JavaScript, or see the list in awesome-rust.

Refactoring solutions for Swift on Xcode

Well, Xcode 8 is out and unfortunately refactoring is still not available for Swift ( Apple (ಠ_ಠ) ).
I'm trying to list all the available options for me to perform complex refactoring (renaming classes, properties used outside of the classes, methods, properties that have name collision with other properties etc..)
What I've come up until now is as follows:
Manually - not an option for me (big and nested project, could result in shorter life span)
Search and replace - better option, but still involves some heavy manual labor (going through each search result and applying replace, because selecting replace all is very risky on just raw text search)
Using AppCode's refactoring option - this option seems to be the most promising, but it appears buggy (not replacing some of the occurrences); also, the program doesn't support Xcode 8, and it costs money (I'm on the 30-day trial).
All in all, this is the best fail-safe solution I can think of.
I was wondering if maybe someone else has other good ways to accomplish complex refactoring in Swift, maybe something that feels safe and robust?
Thanks!

Are there guidelines for updating C++Builder applications for C++Builder 2009?

I have a range of Win32 VCL applications developed with C++Builder from BCB5 onwards, and want to port them to ECB2009 or whatever it's now called.
Some of my applications use the old TNT/TMS unicode components, so I have a good mix of AnsiStrings and WideStrings throughout the code. The new version introduces UnicodeString, and a bunch of #defines that change the way functions like c_str behave.
I want to modify my code in a way that is as backwards-compatible as possible, so that the same code base can still be compiled and run (in a non-unicode fashion) on BCB2007 if necessary.
Particular areas of concern are:
Passing strings to/from Win32 API functions
Interop with TXMLDocument
'Raw' strings used for RS232 comms, etc.
Rather than knife-and-fork the changes, I'm looking for guidelines that I can apply to ease the migration, while keeping backwards compatibility wherever possible.
If no such guidelines already exist, maybe we can formulate some here?
The biggest issue is compatibility between C++Builder 2009 and previous versions. The Unicode differences are part of it, but the project configuration files have changed as well. From the discussions I've been following on the CodeGear forums, there are not a whole lot of choices in the matter.
I think the first place to start, if you have not done so, is the C++Builder 2009 release notes.
The biggest thing seen has been the TCHAR mapping (to wchar_t or char); using the STL string varieties may be a help, since they shouldn't be very different between the two versions. The mapping existed in C++Builder 2007 as well (with the tchar header).
For any code that does not need to be explicitly Ansi or explicitly Unicode, you should consider using the System::String, System::Char, and System::PChar typedefs as much as possible. That will help ease a lot of migration, and they work in previous versions.
When passing a System::String to an API function, you have to take into account the new "TCHAR maps to" setting in the Project options. If you try to pass AnsiString::c_str() when "TCHAR maps to" is set to "wchar_t", or UnicodeString::c_str() when "TCHAR maps to" is set to "char", you will have to perform appropriate typecasts. If you have "TCHAR maps to" set to "wchar_t", UnicodeString::t_str() technically does the same thing as TCHAR does in the API; however, t_str() can be very dangerous if you misuse it (when "TCHAR maps to" is set to "char", t_str() transforms the UnicodeString's internal data to Ansi).
For "raw" strings, you can use the new RawByteString type (though I do not recommend it), or TBytes instead (which is an array of bytes - recommended). You should not be using Ansi/Wide/UnicodeString for non-character data to begin with. Most people used AnsiString as makeshift data buffers in past versions. Do not do that anymore. This is particularly important because AnsiString is now codepage-aware, and thus your data might get converted to other codepages when you least expect it.