Link one target with multiple other targets - swift

I have this setup in my Podfile:
target "A" do
pod "Pod1"
target "ATests" do
pod "Pod2"
end
end
target "B" do
pod "Pod3"
target "BTests" do
pod "Pod4"
end
end
target "SharedTests" do
pod "Pod5"
end
How do I link targets ATests and BTests with SharedTests, but not A and B? I was hoping to use something like link_with, but that seems to be long gone.
My goal is to share code, not necessarily pods, between the test targets in my application.
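One common workaround is a sketch only, not a confirmed answer: since link_with no longer exists, you can factor the pods shared by the test targets into a plain Ruby method and call it from each test target (target names below are taken from the Podfile above). This shares pods; sharing actual source code between test targets would still be done in Xcode, e.g. by adding the shared files or a test-helper framework to both test targets.

def shared_test_pods
  # Pods that every test target should get
  pod "Pod5"
end

target "A" do
  pod "Pod1"
  target "ATests" do
    pod "Pod2"
    shared_test_pods
  end
end

target "B" do
  pod "Pod3"
  target "BTests" do
    pod "Pod4"
    shared_test_pods
  end
end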

Related

How to remove pod spec file from CocoaPods / Specs

I created a public pod, and now I want to delete the podspec because of some security concerns.
pod trunk delete PODNAME VERSION
Take care when doing this: if someone is already using your spec, deleting it will break that person's project.
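For example, with a hypothetical pod published as MyPod at version 1.0.0 (both names are placeholders substituted into the command above):

pod trunk delete MyPod 1.0.0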

If we want to modify a running pod's configuration, is it advisable to edit the Deployment or the Pod?

If we have a requirement to modify a property of running pods, which is the recommended way, and why?
I guess that once a pod is deployed as part of a Deployment, we can modify the pod's properties either with kubectl edit pod or with kubectl edit deploy.
I would like to understand whether there is any difference between these two actions.
Modify the Deployment, not the Pod.
Why?
The Deployment describes the desired state for your pods. The Deployment controller continuously watches the Deployment object in a control loop. It reads the desired pod state from the Deployment specification and tries to ensure that state in the cluster. So if you edit the pod and change something, the Deployment controller will overwrite the change in the next resync, because your modification is not present in the Deployment specification.
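In practice that means making the change at the Deployment level, for example (the deployment name my-app and the container name my-container are placeholders):

# Interactive edit of the Deployment's desired state
kubectl edit deployment my-app
# Or a non-interactive image change; the controller rolls the pods for you
kubectl set image deployment/my-app my-container=registry.example.com/my-app:1.1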
For the most part you can't edit the pods. In the API definition of a PodSpec, the containers and initContainers fields are both described as "Cannot be updated." Almost all of the interesting things in a Pod spec are in the Container sub-objects.
The corollary to this is that you can't "modify properties of running pods" for the most part; you can only delete and replace them with new pods with the properties you want. If you edit the pod template in a deployment spec, Kubernetes will do exactly that.

Replace the image on one pod manually, while the other pods use the main image

Let's say I have 10 pods running a stable version, and I wish to replace the image of one of them to run a newer version before a full rollout.
Is there a way to do that?
Not as such: every pod managed by a Deployment is expected to be identical, including running the same image. You can't change a pod's image once it's been created, and if you change the Deployment's image, it will try to recreate all of its managed pods.
If the only thing you're worried about is the pod starting up, the default behavior of a deployment is to start 25% of its specified replicas with the new image. The old pods will continue running uninterrupted until the new replicas successfully start and pass their readiness checks. If the new pods immediately go into CrashLoopBackOff state, the old pods will still be running.
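That 25% figure comes from the Deployment's rolling-update settings; as a sketch, the defaults look like this in the Deployment spec:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # extra pods started with the new image above the desired count
      maxUnavailable: 25%  # old pods that may be taken down while the rollout is in progress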
If you want to start a pod specifically as a canary deployment, you can create a second Deployment to handle that. You'll need to include some label on the pods (for instance, canary: 'true') where you can distinguish the canary from main pods. This would be present in the pod spec, and in the deployment selector, but it would not be present in the corresponding Service selector: the Service matches both canary and non-canary pods. If this runs successfully then you can remove the canary Deployment and update the image on the main Deployment.
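As a sketch (all names, labels, and images below are assumptions, not taken from the question), the canary Deployment's pod template carries the extra label, while the Service selector, shown after the next answer, must not include it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      canary: 'true'
  template:
    metadata:
      labels:
        app: myapp          # shared with the main Deployment's pods
        canary: 'true'      # only on the canary pods
    spec:
      containers:
        - name: myapp
          image: myapp:new-version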
Like the other answer mentioned, it sounds like you are talking about a canary deployment. You can do this with plain Kubernetes and also with Istio. I prefer Istio, as it gives you fine-grained control over traffic weighting; e.g. you could send 1% of traffic to the canary and 99% to the control, which is great for testing in production. It also lets you route using HTTP headers.
https://istio.io/latest/blog/2017/0.1-canary/
If you want to do it with plain Kubernetes, just create two Deployments with unique names (for example myappv1 and myappv2) whose pods carry the same app= label. Then create a Service whose selector is that app label. The Service will round-robin between the v1 and v2 Deployments.
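A compact sketch of that variant (names are illustrative): both Deployments, myappv1 and myappv2, label their pods app: myapp but run different images, and the Service selects only on that shared label:

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp      # matches pods from both myappv1 and myappv2
  ports:
    - port: 80
      targetPort: 8080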

Unable to find a specification for a local (private) pod

I created two local pods, MainPod & ChildPod, like this:
Create Framework (in Xcode)
pod init & pod install for MainPod & ChildPod
pod spec create MainPod & pod spec create ChildPod (in their folders)
modify them
Then I ran pod init in the root folder of the application:
platform :ios, '13.0'

target 'PrivatePodBugDemo' do
  use_frameworks!
  # pod 'ChildPod', :path => 'DevPods/ChildPod'
  pod 'MainPod', :path => 'DevPods/MainPod'
end
MainPod.podspec
ChildPod.podspec
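The podspec contents aren't reproduced here, but given the error below, MainPod.podspec presumably declares the child dependency along these lines (a sketch; the version and source_files path are assumptions, and the other required attributes such as homepage, license, authors, and source are omitted for brevity):

Pod::Spec.new do |s|
  s.name         = 'MainPod'
  s.version      = '0.1.0'
  s.summary      = 'Main development pod.'
  s.source_files = 'MainPod/Classes/**/*'
  # CocoaPods must be able to resolve this spec from a spec repo or a
  # :path entry in the Podfile, which is what the error below is about.
  s.dependency 'ChildPod'
end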
What did you expect to happen?
I expected everything to work when I include only MainPod, which already depends on ChildPod.
What happened instead?
This error happened
[!] Unable to find a specification for ChildPod depended upon by MainPod
The error disappears when I uncomment pod 'ChildPod', :path => 'DevPods/ChildPod' in the app's Podfile.
But I need to not include ChildPod in the app's Podfile.
Project that demonstrates the issue
https://github.com/ocbnishi/PrivatePodBugDemo
https://github.com/CocoaPods/CocoaPods/issues/9803

kubernetes creates more pods than scale amount

I have encountered a strange situation in one of our clusters: all of a sudden a number of new pods were created, so that we end up with more running pods than the configured scale amount.
So in the dashboard it will show
serviceX pods: 8/2
and then 8 running instances of that service
Questions
How can this possibly happen?
Is there an easy way to get rid of the extra pods (which all seem to be running)?
I have tried changing the scale amount in the dashboard and the extra pods do not disappear.
Both Pod and Deployment are full-fledged objects in the Kubernetes API. A Deployment manages creating Pods by means of ReplicaSets. What it boils down to is that the Deployment will create Pods with a spec taken from its template.
In your case the Deployment named edgeservicepublic-svc is set to have 13 replicas. A Deployment is a kind of controller in Kubernetes, so it is natural that this controller will continuously check whether 13 pods exist. When a Deployment is added to the cluster, it will automatically spin up the requested number of pods and then monitor them. If a pod dies, the Deployment will automatically re-create it. Probably at first not enough pods were created, so the controller keeps working to reach the desired number of them.
To make sure your Deployment works properly, you can delete the Deployment and check that its pods are deleted. Also make sure that you haven't set up an autoscaler ($ kubectl get hpa); if you have, delete it. Then, if you want to change the Deployment specification, edit the Deployment configuration file and apply the changes ($ kubectl apply -f deployment_configuration_file.yaml).
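A minimal command sequence for those checks (the deployment/HPA name serviceX and the replica count are placeholders):

# Check whether an autoscaler is overriding the replica count
kubectl get hpa
kubectl delete hpa serviceX            # only if one exists for this workload
# Re-assert the desired scale, or re-apply the full spec
kubectl scale deployment serviceX --replicas=2
kubectl apply -f deployment_configuration_file.yaml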
Useful documentation about Deployments and autoscaling in the context of GKE.
EDIT:
Basically, first check for an autoscaler and delete it if it exists. I suggested deleting the Deployment because you said you tried to change the scale amount / number of replicas. If you want to be 100% sure the changes are applied, delete the whole Deployment and then recreate it with the desired number of replicas. Of course you can just apply the changes in the Deployment configuration file ($ kubectl edit ... or $ kubectl apply -f ...), but sometimes the existing pods are not deleted, so deleting and recreating is safer. You could also create a new Deployment with the same parameters but a different name.