Need help logging in to a volume with iSCSI persistent login - iscsi

I need a command line to log the volume in with a persistent login. I tried the following command, but it is not working.
iscsicli persistentlogintarget iqn.2003-10.com.lefthandnetworks:mg-test:51:volume " T * * * * * * * * * * * * * * * 0"
Please help
Regards
NewDev

cmd>iscsicli persistentlogintarget TargetName T * * * * * * * * * * * * * * * 0
The output was:
Microsoft iSCSI Initiator Version 6.0 Build 6000
The operation completed successfully.
To confirm that the operation was successful, I entered:
cmd>iscsicli listpersistenttargets
If it doesn't list the targets, then there is some problem. Try qlogintarget to check whether the login itself is successful.
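For example, with the target name from the question (add CHAP credentials if your target requires them), a quick non-persistent login followed by a session listing shows whether the login itself works before you make it persistent:
cmd>iscsicli qlogintarget iqn.2003-10.com.lefthandnetworks:mg-test:51:volume
cmd>iscsicli sessionlist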


Prettier formats ternary statement weirdly

Prettier in VS Code formats my ternary statement from this:
(req.body.remember_me)
? req.session.cookie.maxAge = 1000 * 60 * 60 * 24 * 30
: req.session.cookie.maxAge = null
to that:
req.body.remember_me
? (req.session.cookie.maxAge = 1000 * 60 * 60 * 24 * 30)
: (req.session.cookie.maxAge = null)
Is there any way to disable or change this behavior? Or is it actually fine?
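Two common ways around this (a sketch, not an official Prettier recommendation): move the assignment out of the ternary so there is nothing for Prettier to wrap in parentheses, or opt the statement out of formatting with a prettier-ignore comment.
// Option 1: assign once, keep the ternary side-effect free.
req.session.cookie.maxAge = req.body.remember_me
  ? 1000 * 60 * 60 * 24 * 30
  : null;

// Option 2: keep the original statement and exclude it from formatting.
// prettier-ignore
(req.body.remember_me)
  ? req.session.cookie.maxAge = 1000 * 60 * 60 * 24 * 30
  : req.session.cookie.maxAge = null;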

Does RETURNS TABLE defined inside a stored procedure create a temporary table?

There is a stored procedure defined which takes some parameters and runs a loop inside it. The issue is that whenever I execute it, this single process/function takes 100% CPU and approximately 2 minutes to complete. I have increased the number of cores, but it still takes 100% CPU. I am thinking that maybe it is creating temporary tables and PostgreSQL is unable to autovacuum them.
I have tried several configurations, but to no avail.
create or replace function some_function(
    offers numeric[],
    skuslist character varying[],
    startdate date,
    enddate date
) returns table (
    prod character varying,
    offer numeric,
    dates date,
    baseline double precision,
    promoincremental double precision,
    couponincremental double precision,
    affinityquantity double precision,
    affinitymargin double precision,
    affinityrevenue double precision,
    cannibalisationquantity double precision,
    cannibalisationrevenue double precision,
    cannibalisationmargin double precision
) language plpgsql as $function$
declare
a numeric[] := offers;
i numeric;
begin
foreach i in array a loop
return query
select
models.prod,
i as offer,
date(models.transdate_dt) as dates,
greatest(0,(sum(models.unitretailprice) * sum(coefficients.unit_retail_price)) + (sum(models.flag::int) * sum(coefficients.flag::int)) + (sum(models.mc_baseline) * sum(coefficients.mc_baseline)) + (sum(models.mc_day_avg) * sum(coefficients.mc_day_avg)) + (sum(models.mc_day_normal) * sum(coefficients.mc_day_normal)) + (sum(models.mc_week_avg) * sum(coefficients.mc_week_avg)) + (sum(models.mc_week_normal) * sum(coefficients.mc_week_normal)) + (sum(models.sku_day_avg) * sum(coefficients.sku_day_avg)) + (sum(models.sku_month_avg) * sum(coefficients.sku_month_avg)) + (sum(models.sku_month_normal)* sum(coefficients.sku_month_normal)) + (sum(models.sku_moving_avg) * sum(coefficients.sku_moving_avg)) + (sum(models.sku_week_avg) * sum(coefficients.sku_week_avg)) + (sum(models.sku_week_normal)* sum(coefficients.sku_week_normal))) as baseline,
greatest(0, ((0 * sum(coefficients.f)) + (0 * sum(coefficients.p)) + (i * sum(coefficients.a)))) as promoIncremental,
greatest(0, (sum(models.basket_dollar_off) * sum(coefficients.basket_dollar_off)) + (sum(models.basket_per_off) * sum(coefficients.basket_per_off)) + (sum(models.category_dollar_off) * sum(coefficients.category_dollar_off)) + (sum(models.category_per_off) * sum(coefficients.category_per_off)) + (sum(models.disc_per) * sum(coefficients.disc_per))) as couponIncremnetal,
greatest(0, (0 * sum(coefficients.f)) + (0 * sum(coefficients.p)) + (i * sum(coefficients.a))) * sum(affinity.pull) * sum(affinity.confidence) as affinityquantity,
greatest(0, (0 * sum(coefficients.f)) + (0 * sum(coefficients.p)) + (i * sum(coefficients.a))) * sum(affinity.margin_lift) * sum(affinity.confidence) as affinitymargin,
greatest(0, (0 * sum(coefficients.f)) + (0 * sum(coefficients.p)) + (i * sum(coefficients.a))) * sum(affinity.revenue_lift) * sum(affinity.confidence) as affinityrevenue,
greatest(0, (0 * sum(coefficients.f)) + (0 * sum(coefficients.p)) + (i * sum(coefficients.a)))/(NULLIF( greatest(0,(sum(models.unitretailprice) * sum(coefficients.unit_retail_price)) + (sum(models.flag::int) * sum(coefficients.flag::int)) + (sum(models.mc_baseline) * sum(coefficients.mc_baseline)) + (sum(models.mc_day_avg) * sum(coefficients.mc_day_avg)) + (sum(models.mc_day_normal) * sum(coefficients.mc_day_normal)) + (sum(models.mc_week_avg) * sum(coefficients.mc_week_avg)) + (sum(models.mc_week_normal) * sum(coefficients.mc_week_normal)) + (sum(models.sku_day_avg) * sum(coefficients.sku_day_avg)) + (sum(models.sku_month_avg) * sum(coefficients.sku_month_avg)) + (sum(models.sku_month_normal)* sum(coefficients.sku_month_normal)) + (sum(models.sku_moving_avg) * sum(coefficients.sku_moving_avg)) + (sum(models.sku_week_avg) * sum(coefficients.sku_week_avg)) + (sum(models.sku_week_normal)* sum(coefficients.sku_week_normal))) + greatest(0, (0 * sum(coefficients.f)) + (0 * sum(coefficients.p)) + (i * sum(coefficients.a))) + greatest(0, (sum(models.basket_dollar_off) * sum(coefficients.basket_dollar_off)) + (sum(models.basket_per_off) * sum(coefficients.basket_per_off)) + (sum(models.category_dollar_off) * sum(coefficients.category_dollar_off)) + (sum(models.category_per_off) * sum(coefficients.category_per_off)) + (sum(models.disc_per) * sum(coefficients.disc_per))), 0)) * sum(cannibalisation.effect) * (sum(products.price) - (sum(products.price) - i )) as cannibalisationquantity,
((sum(products.price) - i) * sum(models.disc_per) - (sum(products.price) - i) ) * ( greatest(0, (0 * sum(coefficients.f)) + (0 * sum(coefficients.p)) + (i * sum(coefficients.a)))/(NULLIF( greatest(0,(sum(models.unitretailprice) * sum(coefficients.unit_retail_price)) + (sum(models.flag::int) * sum(coefficients.flag::int)) + (sum(models.mc_baseline) * sum(coefficients.mc_baseline)) + (sum(models.mc_day_avg) * sum(coefficients.mc_day_avg)) + (sum(models.mc_day_normal) * sum(coefficients.mc_day_normal)) + (sum(models.mc_week_avg) * sum(coefficients.mc_week_avg)) + (sum(models.mc_week_normal) * sum(coefficients.mc_week_normal)) + (sum(models.sku_day_avg) * sum(coefficients.sku_day_avg)) + (sum(models.sku_month_avg) * sum(coefficients.sku_month_avg)) + (sum(models.sku_month_normal)* sum(coefficients.sku_month_normal)) + (sum(models.sku_moving_avg) * sum(coefficients.sku_moving_avg)) + (sum(models.sku_week_avg) * sum(coefficients.sku_week_avg)) + (sum(models.sku_week_normal)* sum(coefficients.sku_week_normal))) + greatest(0, (0 * sum(coefficients.f)) + (0 * sum(coefficients.p)) + (i * sum(coefficients.a))) + greatest(0, (sum(models.basket_dollar_off) * sum(coefficients.basket_dollar_off)) + (sum(models.basket_per_off) * sum(coefficients.basket_per_off)) + (sum(models.category_dollar_off) * sum(coefficients.category_dollar_off)) + (sum(models.category_per_off) * sum(coefficients.category_per_off)) + (sum(models.disc_per) * sum(coefficients.disc_per))), 0)) * sum(cannibalisation.effect) * (sum(products.price) - (sum(products.price) - i ) )) as cannibalisationrevenue,
(((sum(products.price) - i) * sum(models.disc_per) - (sum(products.price) - i) ) - sum(products.cost)) * ( greatest(0, (0 * sum(coefficients.f)) + (0 * sum(coefficients.p)) + (i * sum(coefficients.a)))/(NULLIF(greatest(0,(sum(models.unitretailprice) * sum(coefficients.unit_retail_price)) + (sum(models.flag::int) * sum(coefficients.flag::int)) + (sum(models.mc_baseline) * sum(coefficients.mc_baseline)) + (sum(models.mc_day_avg) * sum(coefficients.mc_day_avg)) + (sum(models.mc_day_normal) * sum(coefficients.mc_day_normal)) + (sum(models.mc_week_avg) * sum(coefficients.mc_week_avg)) + (sum(models.mc_week_normal) * sum(coefficients.mc_week_normal)) + (sum(models.sku_day_avg) * sum(coefficients.sku_day_avg)) + (sum(models.sku_month_avg) * sum(coefficients.sku_month_avg)) + (sum(models.sku_month_normal)* sum(coefficients.sku_month_normal)) + (sum(models.sku_moving_avg) * sum(coefficients.sku_moving_avg)) + (sum(models.sku_week_avg) * sum(coefficients.sku_week_avg)) + (sum(models.sku_week_normal)* sum(coefficients.sku_week_normal))) + greatest(0, (0 * sum(coefficients.f)) + (0 * sum(coefficients.p)) + (i * sum(coefficients.a))) + greatest(0, (sum(models.basket_dollar_off) * sum(coefficients.basket_dollar_off)) + (sum(models.basket_per_off) * sum(coefficients.basket_per_off)) + (sum(models.category_dollar_off) * sum(coefficients.category_dollar_off)) + (sum(models.category_per_off) * sum(coefficients.category_per_off)) + (sum(models.disc_per) * sum(coefficients.disc_per))), 0)) * sum(cannibalisation.effect) * (sum(products.price) - (sum(products.price) - i ) )) as cannibalisationmargin
from
models
join coefficients on
models.prod = coefficients.prod
and models.si_type = coefficients.si_type
and models.model_type = coefficients.model_type
left join products on
products.unique_id1 = models.prod
left join affinity on
affinity.prod = models.prod
left join cannibalisation on
cannibalisation.prod = models.prod
where
coefficients.prod = any(skusList)
and models.transdate_dt >= startDate
and models.transdate_dt <= endDate
group by
models.prod,
dates,
offer ;
end loop;
end;
$function$ ;
Here is how I call the function:
select
*
from
some_function(array[1.5,
2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5, 10.5, 11.5, 12.5, 13.5, 14.5, 15.5, 16.5 ],
'{12841276, 11873916,07915473, 01504273,10405843,11231446,12224242,11249380,08604365, 11248952, 11230018,10447621,11229820,10406916,09578733,11280161, 01503697, 11923554, 10406460,11295219,01458421,09053893,11224409, 06755789, 11317377, 11275047,12231817,11309507,10447522,10406296, 10406338, 01460658, 12272811,11318870,11248838,10406130,11248812, 11223682, 11276748, 10447605, 11232451, 10405827,10447670,08177743, 10405231, 02326791, 12224226,12231650,11929197,01504380 }',
'2018-01-02'::date,
'2018-01-06'::date );
I expected it to take around 30% CPU at most, because it is only performing some calculations, but it drives CPU usage to 100%. Can anyone kindly help in this regard?
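As far as I know, RETURN QUERY does not create a temporary table; it accumulates the result rows in an internal tuplestore, so autovacuum is not the issue. The CPU time almost certainly comes from re-running the large aggregate join once per element of offers (16 times in the call above). The usual rewrite is a single set-based query with unnest(), so the join and aggregation run only once. A toy sketch of the pattern (made-up demo_sales table, not the original schema):
-- Illustration only: replace a FOREACH loop that re-runs a query per array
-- element with one set-based query using unnest().
create table if not exists demo_sales(prod text, qty numeric);
insert into demo_sales values ('a', 10), ('b', 20);

-- One scan of demo_sales for all offers, instead of one scan per offer:
select s.prod, o.offer, sum(s.qty) * o.offer as weighted
from demo_sales s
cross join unnest(array[1.5, 2.5, 3.5]) as o(offer)
group by s.prod, o.offer;
Inside the function the same idea would be cross join unnest(offers) as o(offer), with o.offer replacing i in the select list and the group by.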

How do I calculate the limit of Substitutions per personalization block in SendGrid?

I'm trying to send an email with SendGrid using a transactional template, and I'm running into a problem with the substitution limit:
Substitutions are limited to 10000 bytes per personalization block.
SendGrid Docs
I could work around this by fetching the template from SendGrid and replacing all the placeholders in it myself, but I need to know when that is necessary. I think I should check the size of the substitutions against the limit, but I don't know how SendGrid calculates it. Maybe the total length of all the strings in the substitutions?
Any help?
Thanks!
I resolved my problem and am sharing the solution for anyone who cares ;)
Solution:
You need to keep the Substitutions small by using them together with Sections.
/**
 * When the $variables are too big, SendGrid will reject the message because
 * SendGrid's Substitutions are limited (10000 bytes).
 *
 * Before, the format of the substitutions was:
 *
 * {
 *   "to": [
 *     "example@example.com"
 *   ],
 *   "sub": {
 *     "%FirstName%": "This string may be too long",
 *     "%LastName%": "This string may be too long",
 *     ....
 *     ....
 *     more entries, and here the limit applies
 *   }
 * }
 *
 * To resolve this problem we use Sections together with Substitutions, as below:
 *
 * {
 *   "to": [
 *     "example@example.com"
 *   ],
 *   "sub": {
 *     "%FirstName%": "%FirstName%",
 *     "%LastName%": "%LastName%",
 *     ....
 *     ....
 *     more and more
 *   },
 *   "section": {
 *     "%FirstName%": "This string may be too long",
 *     "%LastName%": "This string may be too long",
 *     ....
 *     ....
 *   }
 * }
 *
 * We keep the Substitutions short and let the Sections hold the long strings.
 */
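As for the original question of how the 10000 bytes are counted: the docs do not spell it out, so a conservative check (an assumption on my part, not something from SendGrid's documentation) is to measure the UTF-8 byte length of the serialized sub object of each personalization and switch to the Section trick when it approaches the limit. A minimal Node.js sketch:
// Assumption: the 10000-byte limit roughly tracks the byte size of the serialized
// substitutions of one personalization block; the exact accounting is undocumented.
const SUBSTITUTION_LIMIT_BYTES = 10000;

function substitutionsTooLarge(sub) {
  return Buffer.byteLength(JSON.stringify(sub), 'utf8') >= SUBSTITUTION_LIMIT_BYTES;
}

// Usage:
const sub = {
  '%FirstName%': 'This string may be too long',
  '%LastName%': 'This string may be too long'
};
if (substitutionsTooLarge(sub)) {
  // move the long values into "section" and keep only short tag-to-tag mappings in "sub"
}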
As per the SendGrid documentation, they are going to remove support for Sections, and they say Substitutions are the alternative to Sections. Do you have any other alternative for large content (> 10000 bytes)?

What is the difference between querying by distance and using the earthdistance module in postgres?

In postgres, there are two ways that I know of to query based on distance.
The first is "querying by distance" using a particular algorithm (as seen here http://daynebatten.com/2015/09/latitude-longitude-distance-sql/).
SELECT zcta.*,
3958.755864232 * 2 *
ASIN(SQRT(POWER(SIN((41.318301 - zcta.latitude) *
PI() / 180 / 2), 2) + COS(41.318301 * PI() / 180) *
COS(zcta.latitude * PI() / 180) *
POWER(SIN((-83.6174935 - zcta.longitude) *
PI() / 180 / 2), 2))) AS distance,
MOD(CAST((ATAN2( ((zcta.longitude - -83.6174935) / 57.2957795),
((zcta.latitude - 41.318301) / 57.2957795)) *
57.2957795) + 360 AS decimal), 360) AS bearing
FROM "zcta"
WHERE (zcta.latitude BETWEEN 40.59464208444576
AND 42.04195991555424
AND zcta.longitude BETWEEN -84.58101890178294
AND -82.65396809821705
AND (3958.755864232 * 2 * ASIN(SQRT(POWER(SIN((41.318301 - zcta.latitude) *
PI() / 180 / 2), 2) + COS(41.318301 * PI() / 180) * COS(zcta.latitude *
PI() / 180) * POWER(SIN((-83.6174935 - zcta.longitude) *
PI() / 180 / 2), 2))))
BETWEEN 0.0
AND 50)
ORDER BY distance ASC
The second is the earthdistance module (https://www.postgresql.org/docs/8.3/static/earthdistance.html) for geospatial queries.
select *
from zcta
where earth_box(ll_to_earth(41.318301, -83.6174935), 63067.2) @>
ll_to_earth(zcta.latitude, zcta.longitude)
What is the difference here? Which is better to use? Which is more accurate? How does each work?
A bit of a late answer, but here is my addition. The first is a roll-your-own solution, whereas the latter uses a PostgreSQL extension (module) called earthdistance. To install it, in psql:
CREATE EXTENSION IF NOT EXISTS earthdistance;
You might need to create the cube extension first, as earthdistance depends on it. Or...
CREATE EXTENSION IF NOT EXISTS earthdistance CASCADE;
Here is the documentation.
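As a small addition (a sketch reusing the zcta columns from the question), the module also provides earth_distance(), which returns the great-circle distance in metres; combined with the earth_box() filter you get both the fast pre-filter and an exact distance to order by (the box is only a bounding box, so it can let through points slightly outside the radius):
select zcta.*,
       earth_distance(ll_to_earth(41.318301, -83.6174935),
                      ll_to_earth(zcta.latitude, zcta.longitude)) as distance_m
from zcta
where earth_box(ll_to_earth(41.318301, -83.6174935), 63067.2) @>
      ll_to_earth(zcta.latitude, zcta.longitude)
order by distance_m;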

Quartz schedule except specific time

Is it possible to create a CronExpression with: "fire every 5 min but not run at 00:05 and 00:10"?
org.quartz.CronScheduleBuilder.cronSchedule("0 0/5 * * * ?")
You will have to use two expressions:
0 15/5 0 * * ?
0 0/5 1-23 * * ?
The first expression covers the 12 AM hour (minutes 15 to 55 in steps of 5); the second covers every 5 minutes for the rest of the day.
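If it helps, here is a rough Quartz 2.x sketch of wiring both expressions to the same job (class and identity names are made up for the example). Note that the first expression starts at minute 15, so 00:00 is skipped as well; if 00:00 is still wanted it has to be added to the minute field.
import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class EveryFiveMinutesExceptEarlyMidnight {

    // Placeholder job; substitute your own Job implementation.
    public static class MyJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            System.out.println("Fired at " + context.getFireTime());
        }
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobDetail job = JobBuilder.newJob(MyJob.class)
                .withIdentity("everyFiveMinutes")
                .build();

        // Hour 0 only, minutes 15-55 in steps of 5 (00:00, 00:05 and 00:10 are skipped).
        Trigger midnightHour = TriggerBuilder.newTrigger()
                .withIdentity("midnightHour")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 15/5 0 * * ?"))
                .build();

        // Every 5 minutes during hours 1-23.
        Trigger restOfDay = TriggerBuilder.newTrigger()
                .withIdentity("restOfDay")
                .forJob(job)
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0/5 1-23 * * ?"))
                .build();

        scheduler.scheduleJob(job, midnightHour); // registers the job with the first trigger
        scheduler.scheduleJob(restOfDay);         // attaches the second trigger to the same job
        scheduler.start();
    }
}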