Set a list of accepted zones in Eureka - netflix-eureka

Is there a way to ignore zones or define a list of accepted zones in Eureka? For example, if we have 3 zones (office, shahbour, joe),
I want the services in zone shahbour to only use services defined in shahbour as primary and office as secondary, and to ignore all other zones (in this example, joe).
I tried the configuration below and it works for preferring the same zone, but if there is no service in the same zone it load balances across all the other zones:
spring:
  profiles: shahbour
eureka:
  instance:
    metadataMap:
      zone: shahbour
  client:
    region: lebanon
    serviceUrl:
      defaultZone: http://office:8761/eureka/
    preferSameZoneEureka: true
    availabilityZones:
      lebanon: shahbour,office
I thought setting availabilityZones would do this, but it does not.
This is for a development environment where I am trying to set up each developer to use his own machine as a zone; if a service does not exist there, the office server should be used as backup, but other developers' machines should not be used.

I did not find any place to set an accepted list of zones in Eureka, but what I found is that we can create a custom ServerListFilter in Ribbon, which is used by both Feign and Zuul. Below is the code:
public class DevServerListFilter extends ZonePreferenceServerListFilter {

    private final List<String> acceptedZone = new ArrayList<>();

    public DevServerListFilter(String[] acceptedZones) {
        for (String zone : acceptedZones) {
            this.acceptedZone.add(zone);
        }
    }

    @Override
    public void initWithNiwsConfig(IClientConfig niwsClientConfig) {
        super.initWithNiwsConfig(niwsClientConfig);
    }

    @Override
    public List<Server> getFilteredListOfServers(List<Server> servers) {
        List<Server> zoneAffinityFiltered = super.getFilteredListOfServers(servers);
        Set<Server> candidates = Sets.newHashSet(zoneAffinityFiltered);
        // remove every server whose zone is not in the accepted list
        for (Server server : candidates) {
            if (!acceptedZone.contains(server.getZone())) {
                zoneAffinityFiltered.remove(server);
            }
        }
        return zoneAffinityFiltered;
    }
}
The above filter extends ZonePreferenceServerListFilter and adds a check against the list of accepted zones; any server whose zone is not in this list is ignored.
@Configuration
@RibbonClients(defaultConfiguration = MyDefaultRibbonConfiguration.class)
public class MyRibbonConfiguration {
}
The default configuration for all my clients:
@Configuration
public class MyDefaultRibbonConfiguration {

    // @Bean
    // public IPing ribbonPing(IClientConfig config) {
    //     return new PingUrl();
    // }

    @Bean
    public ServerListFilter<Server> ribbonServerListFilter(IClientConfig config, EurekaClientConfigBean eurekaClientConfigBean) {
        String[] availabilityZones = eurekaClientConfigBean.getAvailabilityZones(eurekaClientConfigBean.getRegion());
        DevServerListFilter filter = new DevServerListFilter(availabilityZones);
        filter.initWithNiwsConfig(config);
        return filter;
    }
}
The configuration code. Note that this configuration has to live in a package excluded from @ComponentScan, as required by the documentation. I used the availability-zones property here, but any list can be used.
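For illustration, a minimal sketch of such a layout (package and class names are placeholders, not from the sample): the application class scans only its own package, while the Ribbon default configuration is kept outside of it.
// Hypothetical layout, for illustration only: the main application scans
// com.example.demo and below, while the Ribbon default configuration
// (MyDefaultRibbonConfiguration) sits in a package outside this scan,
// as the Spring Cloud documentation requires.
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication // component scanning is limited to com.example.demo and sub-packages
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}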
github sample

Related

Spring Data MongoDB change stream with multiple application instances

I have a Spring Boot application with Spring Data MongoDB, where I connect to the Mongo change stream to save the changes to an audit collection. My application runs multiple instances (2 instances) and will be scaled up to n instances when the load increases. When records are created in the original collection ("my-collection"), the listeners are triggered in all running instances and create duplicate records. Following is my setup:
build.gradle
…
// spring data mongodb version 3.1.5
implementation 'org.springframework.boot:spring-boot-starter-data-mongodb'
…
Listener config
@Configuration
@Slf4j
public class MongoChangeStreamListenerConfig {

    @Bean
    MessageListenerContainer changeStreamListenerContainer(
            MongoTemplate template,
            PartyConsentAuditListener consentAuditListener,
            ErrorHandler errorHandler) {
        MessageListenerContainer messageListenerContainer =
            new MongoStreamListenerContainer(template, errorHandler);
        ChangeStreamRequest<PartyConsentEntity> request =
            ChangeStreamRequest.builder(consentAuditListener)
                .collection("my-collection")
                .filter(newAggregation(match(where("operationType").in("insert", "update", "replace"))))
                .fullDocumentLookup(FullDocument.UPDATE_LOOKUP)
                .build();
        messageListenerContainer.register(request, MyEntity.class, errorHandler);
        log.info("mongo stream listener is registered");
        return messageListenerContainer;
    }

    @Bean
    ErrorHandler getLoggingErrorHandler() {
        return new ErrorHandler() {
            @Override
            public void handleError(Throwable throwable) {
                log.error("error in creating audit records {}", throwable);
            }
        };
    }
}
Listener container
public class MongoStreamListenerContainer extends DefaultMessageListenerContainer {

    public MongoStreamListenerContainer(MongoTemplate template, ErrorHandler errorHandler) {
        super(template, Executors.newFixedThreadPool(15), errorHandler);
    }

    @Override
    public boolean isAutoStartup() {
        return true;
    }
}
ChangeListener
/**
 * This class listens to the MongoDB change stream and processes changes. onMessage is triggered
 * when a record is added, updated or replaced in MongoDB.
 */
@Component
@Slf4j
@RequiredArgsConstructor
public class MyEntityAuditListener
        implements MessageListener<ChangeStreamDocument<Document>, MyEntity> {

    @Override
    public void onMessage(Message<ChangeStreamDocument<Document>, MyEntity> message) {
        var update = message.getBody();
        log.info("db change event received");
        if (update != null) {
            log.info("creating audit entries for id {}", update.getId());
            // This executes in all the instances, creating duplicate records
        }
    }
}
Is there a way to control the execution so that only one instance processes a given event at a time and the load is shared between the nodes? It would be really nice to know if there is a configuration in Spring Data MongoDB to control this flow.
Also, I have checked the following post on Stack Overflow and I am not sure how to use it with Spring Data:
Mongo Change Streams running multiple times (kind of): Node app running multiple instances
Any help or tips to resolve this issue are highly appreciated. Thank you very much in advance.

Different AuthenticationManager per path/route in Spring Security MVC

Preamble
Since there are a lot of questions on StackOverflow about this already, I first want to ensure that this is not a duplicate and differentiate.
This is about
Having 2 (or more) different AuthenticationProviders in 2 different AuthenticationManagers, to be used on different routes.
Using the methods of Spring Security 5.5, not 3.x.
Using a non XML configuration based approach
So the question is not about:
How to include several AuthenticationProviders in one AuthenticationManager to offer "alternative authentications" (which is what most questions tend to be about)
Case
Assume one has 2 custom AuthenticationProviders: CATApiTokenProvider and DOGApiTokenProvider. It is by design that we do not talk about OAuth/JWT/Basic/Form providers, since they offer shortcuts.
Now we have 2 REST API endpoints /dog/endpoint and /cat/endpoint.
Question
How would one properly implement this today, with Spring Security 5.5:
We want the authentication provider CATApiTokenProvider to only be able to authenticate requests on /cat/endpoint
We want the authentication provider DOGApiTokenProvider to only be able to authenticate requests on /dog/endpoint
So one cannot authenticate with a cat token on /dog/endpoint and neither with a dog token on /cat/endpoint.
My Ideas/Approaches
a) I understand that since I have custom Cat/Dog filters, one can use the AuthenticationManagerResolver and pass one instance into the filter when creating the bean. This resolver might look like
public AuthenticationManagerResolver<HttpServletRequest> resolver() {
    return request -> {
        if (request.getPathInfo().startsWith("/dog/")) {
            try {
                return ???;
            } catch (Exception exception) {
                log.error(exception);
            }
        }
        if (request.getPathInfo().startsWith("/cat/")) {
            try {
                return ???;
            } catch (Exception exception) {
                log.error(exception);
            }
        }
    };
}
Two questions with that would be:
how do I return different authentication managers here? How do I instantiate 2 different AMs, one with the CatAP and one with the DogAP (see the sketch after this list)? Currently I use public void configure(AuthenticationManagerBuilder auth), but as far as I understand, that only configures 'the one' AuthenticationManager. I could add the DogAP and CatAP there, but that would leave us with 1 AM holding 2 APs, so when using this AM I could authenticate with the dog token on the cat endpoint.
is this really the right way to implement this? I would have expected to be able to provide the AM at the SecurityConfiguration level.
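For illustration, this is roughly what I mean by two separate managers, each wrapping one provider; a sketch only, assuming ProviderManager is used directly:
// Sketch: one ProviderManager per custom provider, so each resulting
// AuthenticationManager can only authenticate its own token type.
AuthenticationManager catManager =
        new ProviderManager(Collections.singletonList(new CATApiTokenProvider()));
AuthenticationManager dogManager =
        new ProviderManager(Collections.singletonList(new DOGApiTokenProvider()));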
b) Somehow instantiate 2 different AuthenticationManagers and then use the SecurityConfiguration to assign them to different matchers.
Two questions:
what is the right way to spawn 2 different AMs with different providers?
I cannot understand how I would add an AM for a specific route matcher:
http.authorizeRequests()
.antMatchers("/dog/**")
.?
You can either publish multiple filter chains or wire your own AuthenticationFilter with an AuthenticationManagerResolver.
You can use an AuthenticationManagerResolver to return different AuthenticationManagers. Since Spring Security 5.4.0 we no longer need to extend WebSecurityConfigurerAdapter to configure our SecurityFilterChain; you can instead define a bean of type SecurityFilterChain.
I'll go into detail on wiring your own AuthenticationFilter.
@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Bean
    public SecurityFilterChain apiSecurity(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests((authz) -> authz
                .anyRequest().authenticated());
        http.addFilterBefore(apiAuthenticationFilter(), UsernamePasswordAuthenticationFilter.class);
        return http.build();
    }

    private AuthenticationFilter apiAuthenticationFilter() {
        AuthenticationFilter authenticationFilter = new AuthenticationFilter(
                new ApiAuthenticationManagerResolver(), new BasicAuthenticationConverter());
        authenticationFilter.setSuccessHandler((request, response, authentication) -> {});
        return authenticationFilter;
    }

    public static class ApiAuthenticationManagerResolver implements AuthenticationManagerResolver<HttpServletRequest> {

        private final Map<RequestMatcher, AuthenticationManager> managers = Map.of(
                new AntPathRequestMatcher("/dog/**"), new DogAuthenticationProvider()::authenticate,
                new AntPathRequestMatcher("/cat/**"), new CatAuthenticationProvider()::authenticate
        );

        @Override
        public AuthenticationManager resolve(HttpServletRequest request) {
            for (Map.Entry<RequestMatcher, AuthenticationManager> entry : managers.entrySet()) {
                if (entry.getKey().matches(request)) {
                    return entry.getValue();
                }
            }
            throw new IllegalArgumentException("Unable to resolve AuthenticationManager");
        }
    }

    public static class DogAuthenticationProvider implements AuthenticationProvider {

        @Override
        public Authentication authenticate(Authentication authentication) throws AuthenticationException {
            if (authentication.getName().endsWith("_dog")) {
                return new UsernamePasswordAuthenticationToken(authentication.getName(), authentication.getCredentials(),
                        AuthorityUtils.createAuthorityList("ROLE_DOG"));
            }
            throw new BadCredentialsException("Username should end with _dog");
        }

        @Override
        public boolean supports(Class<?> authentication) {
            return true;
        }
    }

    public static class CatAuthenticationProvider implements AuthenticationProvider {

        @Override
        public Authentication authenticate(Authentication authentication) throws AuthenticationException {
            if (authentication.getName().endsWith("_cat")) {
                return new UsernamePasswordAuthenticationToken(authentication.getName(), authentication.getCredentials(),
                        AuthorityUtils.createAuthorityList("ROLE_CAT"));
            }
            throw new BadCredentialsException("Username should end with _cat");
        }

        @Override
        public boolean supports(Class<?> authentication) {
            return true;
        }
    }
}
In the example above, we have two AuthenticationProviders, one for cat and the other for dog. They are resolved by an AntPathRequestMatcher matching the /dog/** and /cat/** endpoints inside the ApiAuthenticationManagerResolver. There is no need to define an AuthenticationManager for each of dog and cat, since AuthenticationProvider and AuthenticationManager expose the same authenticate method signature (which is why the method reference works).
The ApiAuthenticationManagerResolver is then wired inside an AuthenticationFilter in your filter chain.
You can also define two different filter chains, one for each endpoint, like so:
@Bean
public SecurityFilterChain dogApiSecurity(HttpSecurity http) throws Exception {
    http.requestMatchers((matchers) -> matchers
            .antMatchers("/dog/**"));
    http.authorizeRequests((authz) -> authz
            .anyRequest().authenticated());
    http.httpBasic();
    http.authenticationProvider(new DogAuthenticationProvider());
    return http.build();
}

@Bean
public SecurityFilterChain catApiSecurity(HttpSecurity http) throws Exception {
    http.requestMatchers((matchers) -> matchers
            .antMatchers("/cat/**"));
    http.authorizeRequests((authz) -> authz
            .anyRequest().authenticated());
    http.httpBasic();
    http.authenticationProvider(new CatAuthenticationProvider());
    return http.build();
}
Please note that when defining multiple filter chains the ordering is important; make use of the @Order annotation in those scenarios.
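For example, a minimal sketch of the two chains above with explicit ordering (the order values here are arbitrary, as long as they differ):
// Sketch: an @Order value per SecurityFilterChain bean makes the evaluation
// order of the chains explicit and deterministic.
@Bean
@Order(1) // evaluated before the cat chain
public SecurityFilterChain dogApiSecurity(HttpSecurity http) throws Exception {
    http.requestMatchers((matchers) -> matchers.antMatchers("/dog/**"));
    http.authorizeRequests((authz) -> authz.anyRequest().authenticated());
    http.httpBasic();
    http.authenticationProvider(new DogAuthenticationProvider());
    return http.build();
}

@Bean
@Order(2)
public SecurityFilterChain catApiSecurity(HttpSecurity http) throws Exception {
    http.requestMatchers((matchers) -> matchers.antMatchers("/cat/**"));
    http.authorizeRequests((authz) -> authz.anyRequest().authenticated());
    http.httpBasic();
    http.authenticationProvider(new CatAuthenticationProvider());
    return http.build();
}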
When you do http.requestMatcher(new AntPathRequestMatcher("/endpoint/**")); you are telling Spring Security to call that filter chain only when the request matches that path.
There is also a ticket in Spring Security's repository to provide an AuthenticationManagerResolver implementation that accepts a Map<RequestMatcher, AuthenticationManager>; if you think it makes sense, give it a thumbs up there.

Liferay: create a custom PanelCategory for multiple portlets

I use Liferay 7.2 and Liferay IDE (Eclipse). I created two separate Liferay admin portlets to provide a view on the database entries. In the first portlet, "Teachers", I added a new panel called "School" with the generated code below in the application.list package.
Here is the code of PanelApp.java:
@Component(
    immediate = true,
    property = {
        "panel.app.order:Integer=300",
        "panel.category.key=" + TeachersPanelCategoryKeys.CONTROL_PANEL_CATEGORY
    },
    service = PanelApp.class
)
public class TeachersPanelApp extends BasePanelApp {

    @Override
    public String getPortletId() {
        return TeachersPortletKeys.TEACHERS;
    }

    @Override
    @Reference(
        target = "(javax.portlet.name=" + TeachersPortletKeys.TEACHERS + ")",
        unbind = "-"
    )
    public void setPortlet(Portlet portlet) {
        super.setPortlet(portlet);
    }
}
And the keys class, TeachersPanelCategoryKeys.java:
public class TeachersPanelCategoryKeys {
    public static final String CONTROL_PANEL_CATEGORY = "Teachers";
}
And here is the code of PanelCategory.java:
@Component(
    immediate = true,
    property = {
        "panel.category.key=" + PanelCategoryKeys.SITE_ADMINISTRATION,
        "panel.category.order:Integer=300"
    },
    service = PanelCategory.class
)
public class TeachersPanelCategory extends BasePanelCategory {

    @Override
    public String getKey() {
        return TeachersPanelCategoryKeys.CONTROL_PANEL_CATEGORY;
    }

    @Override
    public String getLabel(Locale locale) {
        return LanguageUtil.get(locale, "School");
    }
}
And here is the code of portlet.java
@Component(
    immediate = true,
    property = {
        "com.liferay.portlet.add-default-resource=true",
        "com.liferay.portlet.display-category=category.hidden",
        "com.liferay.portlet.header-portlet-css=/css/main.css",
        "com.liferay.portlet.layout-cacheable=true",
        "com.liferay.portlet.private-request-attributes=false",
        "com.liferay.portlet.private-session-attributes=false",
        "com.liferay.portlet.render-weight=50",
        "com.liferay.portlet.use-default-template=true",
        "javax.portlet.display-name=Teachers",
        "javax.portlet.expiration-cache=0",
        "javax.portlet.init-param.template-path=/",
        "javax.portlet.init-param.view-template=/view.jsp",
        "javax.portlet.name=" + TeachersPortletKeys.TEACHERS,
        "javax.portlet.resource-bundle=content.Language",
        "javax.portlet.security-role-ref=power-user,user",
    },
    service = Portlet.class
)
public class TeachersPortlet extends MVCPortlet {

    // some code to get entries from db

    @Override
    public void doView(final RenderRequest renderRequest, final RenderResponse renderResponse)
            throws IOException, PortletException {
        // some code
Now I want to add the second portlet, "Students", under the same panel "School". I created it in the same way as "Teachers", but now I have two "School" panels, as shown in the image below.
I just want to display one panel category called "School" that contains both Teachers and Students in the list.
I do not know how to do that.
As you're implementing a TeachersPanelCategory, I'm assuming you're also implementing a StudentsPanelCategory. From a naming perspective, I'd have expected a SchoolPanelCategory.
I'm currently at a loss as to how the Control Panel portlets actually declare their associated panel, but that is where you would pick the common "School" panel and use the same spelling for both.
In other words: if you deploy two panels with the same name, I'd expect exactly what you document here. Make sure you're only deploying one of them.
Edit: I'd like to know what TeachersPanelCategoryKeys.CONTROL_PANEL_CATEGORY is defined as, and the corresponding (assumed, not shown) StudentsPanelCategoryKeys.CONTROL_PANEL_CATEGORY. Both categories have the same label, but if they have different keys, they'll be different. I'm not sure what happens when you deploy two components with the same key: You should deploy only one.
Edit 2: I had missed the code before: you're declaring the key of your first category as "Teachers" and the label as "School". I'm assuming that the key for your other category is "Students". Liferay organizes the categories by key, and if the keys are different, you'll end up with two different categories. Make their key match their shared name, e.g. create a single SchoolPanelCategory and associate your portlets/panelApps with it:
@Component(
    immediate = true,
    property = {
        "panel.category.key=" + PanelCategoryKeys.SITE_ADMINISTRATION,
        "panel.category.order:Integer=300"
    },
    service = PanelCategory.class
)
public class SchoolPanelCategory extends BasePanelCategory {

    @Override
    public String getKey() {
        return "school"; // this is the category that you want to associate with
    }

    @Override
    public String getLabel(Locale locale) {
        return LanguageUtil.get(locale, "School");
    }
}
and
@Component(
    immediate = true,
    property = {
        "panel.app.order:Integer=300",
        "panel.category.key=school" // referencing the category created above
                                    // (use the same for your StudentsPanelApp)
    },
    service = PanelApp.class
)
public class TeachersPanelApp extends BasePanelApp {

    @Override
    public String getPortletId() {
        return TeachersPortletKeys.TEACHERS;
    }

    @Override
    @Reference(
        target = "(javax.portlet.name=" + TeachersPortletKeys.TEACHERS + ")",
        unbind = "-"
    )
    public void setPortlet(Portlet portlet) {
        super.setPortlet(portlet);
    }
}
(See the one-line comments within the code for the critical lines. Replace with proper constants if you like)
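For completeness, a sketch of the corresponding StudentsPanelApp referencing the same "school" category. This assumes a StudentsPortletKeys class analogous to TeachersPortletKeys and an arbitrary order value; adjust names to your project:
@Component(
    immediate = true,
    property = {
        "panel.app.order:Integer=400", // arbitrary; place it after Teachers
        "panel.category.key=school"    // same category key as TeachersPanelApp
    },
    service = PanelApp.class
)
public class StudentsPanelApp extends BasePanelApp {

    @Override
    public String getPortletId() {
        return StudentsPortletKeys.STUDENTS; // assumed keys class, analogous to TeachersPortletKeys
    }

    @Override
    @Reference(
        target = "(javax.portlet.name=" + StudentsPortletKeys.STUDENTS + ")",
        unbind = "-"
    )
    public void setPortlet(Portlet portlet) {
        super.setPortlet(portlet);
    }
}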

How to use Storm's New Metrics Reporting API?

I use Storm v1.2.1. After setting things up according to the official documentation, I want to get some metrics in the spout. The spout code is as follows, but the expected metric data does not appear in graphite-web.
Question 1: How do I use the New Metrics Reporting API correctly?
Question 2: How do I get the ACK count metric from the Storm-bundled KafkaSpout using Storm's old or new metrics API?
Using the new API in the spout to count tuples:
public static class MyTestWordSpout extends BaseRichSpout {

    public static Logger LOG = LoggerFactory.getLogger(TestWordSpout.class);
    boolean _isDistributed;
    SpoutOutputCollector _collector;
    private Counter tupleCounter;
    transient CountMetric ackcountMetric;
    long msid = 0;

    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        _collector = collector;
        this.tupleCounter = context.registerCounter("tupleCount");
        ackcountMetric = new CountMetric();
        context.registerMetric("ack_count", ackcountMetric, 5);
    }

    public void close() {
    }

    public void nextTuple() {
        Utils.sleep(100);
        final String[] words = new String[] {"nathan", "mike", "jackson", "golda", "bertels"};
        final Random rand = new Random();
        final String word = words[rand.nextInt(words.length)];
        _collector.emit(new Values(word), msid++);
        this.tupleCounter.inc();
    }

    public void ack(Object msgId) {
        ackcountMetric.incr();
    }

    public void fail(Object msgId) {
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
storm.yaml:
storm.metrics.reporters:
  # Graphite Reporter
  - class: "org.apache.storm.metrics2.reporters.GraphiteStormReporter"
    daemons:
      - "supervisor"
      - "nimbus"
      - "worker"
    report.period: 1
    report.period.units: "SECONDS"
    graphite.host: "10.11.6.79"
    graphite.port: 2003
  - class: "org.apache.storm.metrics2.reporters.ConsoleStormReporter"
    daemons:
      - "worker"
    report.period: 1
    report.period.units: "SECONDS"
Graphite browser screenshot (no expected metric data appears).
You can use this library: https://github.com/staslev/storm-metrics-reporter. Add this to your pom.xml:
<dependency>
  <groupId>com.github.staslev</groupId>
  <artifactId>storm-metrics-reporter</artifactId>
  <version>1.5.0</version>
</dependency>
Add this configuration to your topology:
config.put(YammerFacadeMetric.FACADE_METRIC_TIME_BUCKET_IN_SEC, 30);
config.put(SimpleGraphiteStormMetricProcessor.GRAPHITE_HOST, "127.0.0.1");
config.put(SimpleGraphiteStormMetricProcessor.GRAPHITE_PORT, 2003);
config.put(SimpleGraphiteStormMetricProcessor.REPORT_PERIOD_IN_SEC, 10);
config.put(Config.TOPOLOGY_NAME, YOUR-TOPOLOGY.class.getCanonicalName());
config.registerMetricsConsumer(MetricReporter.class,
new MetricReporterConfig(".*", SimpleGraphiteStormMetricProcessor.class.getCanonicalName()), 1);
And add the following call into the prepare method of your bolts:
public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
    StormYammerMetricsAdapter.configure(stormConf, context, new MetricsRegistry());
}
Then you can check in your browser whether Graphite shows the metrics.

Custom Principal won't be propagated to EJB SessionContext on JBoss AS

In an EJB project, I need to replace the caller principal name in javax.ejb.SessionContext. I use JBoss AS 6.0 Final as the application server.
I defined a custom UserLoginModule that extends UsernamePasswordLoginModule and added a custom principal, but my custom principal is not propagated to the EJB SessionContext.
Here is some code from my custom login module:
@Override
protected Group[] getRoleSets() throws LoginException {
    Group[] groups = new Group[2];
    groups[0] = new SimpleGroup("Roles");
    groups[0].addMember(createRoleIdentity());
    Group callerPrincipal = new SimpleGroup("CallerPrincipal");
    callerPrincipal.addMember(createIdentity(this.getUsername()));
    groups[1] = callerPrincipal;
    subject.getPrincipals().add(callerPrincipal);
    return groups;
}

@Override
protected Principal createIdentity(String username) throws LoginException {
    return new MyCustomPrincipal(username);
}
My custom login module works well, but the caller principal I get from javax.ejb.SessionContext is still a SimplePrincipal.
It turned out that there is a JBoss bug, "EJBContext.getCallerPrincipal() is not returning custom principal": https://issues.jboss.org/browse/JBAS-8427
And a related topic: http://community.jboss.org/thread/44388.
I wonder if you have any experience with this. Is it safe to replace the default principal JBoss creates? Are there any side effects?
With the help of my team, I found a solution; I hope this can be helpful to those who have the same problem.
Instead of sessionContext.getCallerPrincipal(),
use the following to get the custom principal:
try {
    Subject subject = (Subject) PolicyContext.getContext("javax.security.auth.Subject.container");
    Set<Group> subjectGroups = subject.getPrincipals(Group.class);
    Iterator<Group> iter = subjectGroups.iterator();
    while (iter.hasNext()) {
        Group group = iter.next();
        String name = group.getName();
        if (name.equals("CallerPrincipal")) {
            Enumeration<? extends Principal> members = group.members();
            if (members.hasMoreElements()) {
                Principal principal = (Principal) members.nextElement();
                return principal;
            }
        }
    }
} catch (PolicyContextException e1) {
    ...
}
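For reference, here is a minimal, self-contained sketch of that lookup wrapped in a helper; the class and method names are placeholders, not from the original code:
import java.security.Principal;
import java.security.acl.Group;
import java.util.Enumeration;
import java.util.Set;

import javax.security.auth.Subject;
import javax.security.jacc.PolicyContext;
import javax.security.jacc.PolicyContextException;

public final class CallerPrincipalUtil {

    private CallerPrincipalUtil() {
    }

    // Returns the custom principal placed in the "CallerPrincipal" group by the
    // login module, or null if it cannot be found.
    public static Principal getCallerPrincipal() {
        try {
            Subject subject = (Subject) PolicyContext.getContext("javax.security.auth.Subject.container");
            Set<Group> subjectGroups = subject.getPrincipals(Group.class);
            for (Group group : subjectGroups) {
                if ("CallerPrincipal".equals(group.getName())) {
                    Enumeration<? extends Principal> members = group.members();
                    if (members.hasMoreElements()) {
                        return members.nextElement();
                    }
                }
            }
        } catch (PolicyContextException e) {
            // log the failure and fall through to return null
        }
        return null;
    }
}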