How can I determine which security manager is active on z/OS using Java?

I am writing Java code on z/OS and I need to find out which security manager (RACF, ACF2 or TopSecret) is active on the system. How can I do this?

You can use the IBM JZOS package to peek at memory as follows. For production code, I would create an enumeration for the security managers rather than pass strings around and have to deal with string comparisons (see the sketch after the sample).
import com.ibm.jzos.ZUtil;

/**
 * This is a sample program that uses IBM JZOS to determine
 * the Enterprise Security Manager that is active on a z/OS
 * system.
 * <p>
 * @see com.ibm.jzos.ZUtil#peekOSMemory(long, int)
 * @see com.ibm.jzos.ZUtil#peekOSMemory(long, byte[])
 */
public class peek {
    public static void main(String[] args) throws Exception {
        byte[] rcvtIdBytes = new byte[4];
        long pPSA = 0L;
        int psaOffsetCVT = 16;
        long pCVT = ZUtil.peekOSMemory(pPSA + psaOffsetCVT, 4); // Get address of CVT from PSA+16
        int cvtOffsetCVTRAC = 0x3e0; // Offset of CVTRAC (@RCVT) in the CVT
        long pCVTRAC = ZUtil.peekOSMemory(pCVT + cvtOffsetCVTRAC, 4); // Get the address of CVTRAC (mapped by ICHPRCVT)
        // Now we can retrieve the 4-byte ID (in IBM-1047) of the active ESM.
        int cvtracOffsetRCVTID = 0x45; // Offset of RCVTID in the RCVT
        ZUtil.peekOSMemory(pCVTRAC + cvtracOffsetRCVTID, rcvtIdBytes); // Get the RCVTID
        String rcvtId = new String(rcvtIdBytes, "IBM-1047");
        System.out.println("The Security Manager is: " + rcvtId);
    }
}
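For illustration, here is a minimal sketch of the enumeration I have in mind. The RCVTID values ("RCF" for RACF, "ACF2" for ACF2, "TSS" for Top Secret) are the commonly documented identifiers, but treat them as an assumption and verify them against the ICHPRCVT mapping on your release:

/**
 * Sketch of an ESM enumeration keyed by RCVTID.
 * The ID values are assumed; verify against the ICHPRCVT mapping.
 */
public enum EnterpriseSecurityManager {
    RACF("RCF"),
    ACF2("ACF2"),
    TOP_SECRET("TSS"),
    UNKNOWN("");

    private final String rcvtId;

    EnterpriseSecurityManager(String rcvtId) {
        this.rcvtId = rcvtId;
    }

    /** Maps a (possibly blank-padded) RCVTID string to an enum constant. */
    public static EnterpriseSecurityManager fromRcvtId(String id) {
        String trimmed = (id == null) ? "" : id.trim();
        for (EnterpriseSecurityManager esm : values()) {
            if (esm != UNKNOWN && esm.rcvtId.equals(trimmed)) {
                return esm;
            }
        }
        return UNKNOWN;
    }
}

The println in the sample would then become System.out.println("The Security Manager is: " + EnterpriseSecurityManager.fromRcvtId(rcvtId));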

Related

How to get External and Internal storage directory path

I need to get the external and internal storage directory paths to find their sizes, and I am not able to get the paths. In Android we have
android.os.Environment.getExternalStorageDirectory()
From the official HarmonyOS documents: Internal storage and External storage.
You can create a utils class in your project and use the following functions to get the internal and external storage paths:
/**
 * Returns the absolute path to the directory of the device's internal storage.
 *
 * @param context the application context
 * @return the internal storage directory
 */
public static File getInternalStorage(Context context) {
    return context.getFilesDir(); // Can be called directly too
}
/**
 * Returns the absolute path to the directory of the device's primary shared/external storage.
 *
 * @param context the application context
 * @return the external storage directory
 */
public static File getExternalStorage(Context context) {
    File externalFilesDirPath = context.getExternalFilesDir(Environment.DIRECTORY_DOWNLOADS);
    String externalStoragePath = "";
    // Trim the app-specific suffix so only the storage root (e.g. /storage/emulated/0/) remains
    int subPathIndex = externalFilesDirPath.getAbsolutePath().indexOf("/emulated/0/");
    if (subPathIndex > 0) {
        subPathIndex += "/emulated/0/".length();
    }
    if (subPathIndex >= 0 && externalFilesDirPath.getAbsolutePath().contains("/storage/")) {
        externalStoragePath = externalFilesDirPath.getAbsolutePath().substring(0, subPathIndex);
    }
    if (externalStoragePath.length() > 0) {
        externalFilesDirPath = new File(externalStoragePath);
    }
    return externalFilesDirPath;
}
Once you obtain the File object, you can call the following functions to get the storage information:

getTotalSpace(): Returns the size of the partition named by this abstract pathname.
getUsableSpace(): Returns the number of bytes available to this virtual machine on the partition named by this abstract pathname.
getFreeSpace(): Returns the number of unallocated bytes in the partition named by this abstract pathname.
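As a quick illustration, assuming the two helpers above live in a hypothetical StorageUtils class, a caller could report the space on each volume like this:

// Hypothetical usage; StorageUtils is an assumed name for the utils class above.
File internal = StorageUtils.getInternalStorage(context);
File external = StorageUtils.getExternalStorage(context);

long total = external.getTotalSpace();   // size of the partition
long usable = external.getUsableSpace(); // bytes available to this virtual machine
long free = external.getFreeSpace();     // unallocated bytes

Log.i("Storage", "Internal total: " + internal.getTotalSpace() + " bytes");
Log.i("Storage", "External: " + usable + " usable of " + total + " bytes (" + free + " free)");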

Lumen Database Queue: first job always failing with "Allowed memory exhausted"

I have a very odd situation where I set up jobs to run in my Lumen database queue and all but the first job are processed. I keep getting this particular error:
[2017-12-12 22:07:10] lumen.ERROR: Symfony\Component\Debug\Exception\FatalErrorException: Allowed memory size of 1073741824 bytes exhausted (tried to allocate 702558208 bytes) in /var/www/vhosts/XXXXXXXXX$
Stack trace:
#0 /var/www/vhosts/XXXXXXXX/vendor/laravel/lumen-framework/src/Concerns/RegistersExceptionHandlers.php(54): Laravel\Lumen\Application->handleShutdown()
#1 [internal function]: Laravel\Lumen\Application->Laravel\Lumen\Concerns\{closure}()
#2 {main}
I have tried raising the memory limit, but I keep getting the same error with differing values for the exhausted memory.
I find it very odd that it is always the first job and all of the rest of the jobs run perfectly fine. Should I be looking for bad data in the first job?
My code basically looks like this:
This is my Command file
<?php

namespace App\Console\Commands;

use App\Jobs\UpdateMmpAppListJob;
use Illuminate\Console\Command;
use App\Services\MiddlewareApi;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;
use Mockery\Exception;
use Illuminate\Support\Facades\Queue;

class AddEmailsToAppList extends Command
{
    /**
     * The name and signature of the console command.
     *
     * @var string
     */
    protected $signature = 'addemails:nnnmobileapp';

    /**
     * The console command description.
     *
     * @var string
     */
    protected $description = 'This will add all mobile app users in the database to the nnn mobile app list.';

    /**
     * Create a new command instance.
     *
     * @return void
     */
    public function __construct()
    {
        parent::__construct();
    }

    public function handle()
    {
        $chunkSize = 500; // This is the most middleware can handle with its bulk signup call
        $emailChunks = $this->getEmailsToAdd($chunkSize);
        $jobDelay = 120; // Time between queued jobs
        $jobDelayTimeKeeper = 60; // This will be the actual time delay that will be put into the later method
        foreach ($emailChunks as $emailChunk) {
            Queue::later($jobDelayTimeKeeper, new UpdateMmpAppListJob($emailChunk));
            $jobDelayTimeKeeper = $jobDelayTimeKeeper + $jobDelay;
        }
    }

    public function getEmailsToAdd($chunkSize)
    {
        $emails = DB::table('app_users')
            ->join('app_datas', 'app_datas.customer_number', '=', 'app_users.customer_number')
            ->select('app_users.email')
            ->get()
            ->chunk($chunkSize);
        return $emails;
    }
}
Here is my Job File
<?php

namespace App\Jobs;

use App\Services\MiddlewareApi;
use Illuminate\Support\Facades\Log;
use Mockery\Exception;

class UpdateMmpAppListJob extends Job
{
    /**
     * Array of emails to update the list with
     *
     * @var array
     */
    protected $emailArray;

    /**
     * The number of times the job may be attempted.
     *
     * @var int
     */
    public $tries = 2;

    public function __construct($emailArray)
    {
        $this->emailArray = $emailArray;
    }

    public function handle()
    {
        $listCodeToAddTo = 'NNNAPP';
        $sourceId = 'NNNNNNN';
        $middlewareApi = new MiddlewareApi();
        try {
            $middlewareApi->post_add_customer_signup_bulk($listCodeToAddTo, $this->emailArray, $sourceId);
        } catch (\Exception $e) {
            Log::error('An error occurred with the UpdateMmpAppListJob: ' . $e);
            mail('djarrin@NNN.com', 'UpdateNnnAppListJob Failure', 'A failure in the UpdateNnnAppListJob, here is the exception: ' . $e);
        }
    }

    public function failed(\Exception $exception)
    {
        mail('djarrin@moneymappress.com', 'Push Processor Que Failure', 'A failure in the UpdateMmpAppListJob, here is the exception: ' . $exception);
    }
}
Any help/suggestions on this issue would be appreciated.
Your code calls ->get(), which loads the entire result set into memory; that is what causes the huge memory allocation you're seeing. Remove it and let ->chunk(...) work with the query builder instead of the in-memory Collection that get() returned. You also have to provide a callback to chunk() that processes each chunk.
public function handle()
{
    $chunkSize = 500; // This is the most middleware can handle with its bulk signup call
    $jobDelay = 120; // Time between queued jobs
    $jobDelayTimeKeeper = 60; // This will be the actual time delay that will be put into the later method
    DB::table('app_users')
        ->join('app_datas', 'app_datas.customer_number', '=', 'app_users.customer_number')
        ->select('app_users.email')
        ->chunk($chunkSize, function ($emailChunk) use (&$jobDelayTimeKeeper, $jobDelay) {
            Queue::later($jobDelayTimeKeeper, new UpdateMmpAppListJob($emailChunk));
            $jobDelayTimeKeeper = $jobDelayTimeKeeper + $jobDelay;
        });
}
The above concept is correct, but this syntax was required to get past the following error:
[2017-12-14 22:08:26] lumen.ERROR: RuntimeException: You must specify an orderBy clause when using this function. in /home/vagrant/sites/nnn/vendor/illuminate/database/Query/Builder.php:1877
This is for Lumen 5.5:
public function handle()
{
    $chunkSize = 500; // This is the most middleware can handle with its bulk signup call
    $jobDelay = 120; // Time between queued jobs
    $jobDelayTimeKeeper = 60; // This will be the actual time delay that will be put into the later method
    $emails = DB::table('app_users')
        ->join('app_datas', 'app_datas.customer_number', '=', 'app_users.customer_number')
        ->select('app_users.email')
        ->orderBy('app_users.id', 'desc');
    $emails->chunk($chunkSize, function ($emailChunk) use (&$jobDelayTimeKeeper, $jobDelay) {
        Queue::later($jobDelayTimeKeeper, new UpdateMmpAppListJob($emailChunk));
        $jobDelayTimeKeeper = $jobDelayTimeKeeper + $jobDelay;
    });
}

Custom shipping calculator which takes destination into account

I am trying to create a custom calculator which calculates shipping costs based on a delivery address. For now, I will hardcode different fees according to a postcode prefix, e.g.
SK1 = 4
SK2 = 4
SK3 = 4
SK4 = 4
M1 = 6
M2 = 6
M3 = 6
M4 = 6
Everything else = 10
I am following the tutorial here.
The code stub I have is as follows:
<?php
/**
 * Created by IntelliJ IDEA.
 * User: camerona
 * Date: 03/03/2017
 * Time: 08:09
 */

namespace AppBundle\Shipping;

use Sylius\Component\Shipping\Calculator\CalculatorInterface;
use Sylius\Component\Shipping\Model\ShippingSubjectInterface;

class PostCodeCalculator implements CalculatorInterface
{
    public function calculate(ShippingSubjectInterface $subject, array $configuration)
    {
        return $this->postCodeService->getShippingCostForPostCode($subject->getShippingAddress());
    }

    public function getType()
    {
        // TODO: Implement getType() method.
    }
}
Is there a way in Sylius to get access to the shipping address of an order? The ShippingSubjectInterface only allows access to volume, weight, items, and shippables.
/** @var $subject Shipment */
$postCode = $subject->getOrder()->getShippingAddress()->getPostCode();

This allowed me to get the address from the subject.

Bulk removal of Edges on Titan 1.0

I have a long list of edge IDs (about 12 billion) that I want to remove from my Titan graph (which is hosted on an HBase backend).
How can I do it quickly and efficiently?
I tried removing the edges via Gremlin, but that is too slow for that amount of edges.
Is it possible to directly perform Delete commands on HBase? How can I do it? (How do I assemble the Key to delete?)
Thanks
After two days of research, I came up with a solution.
The main purpose: given a very large collection of string edge IDs, implement logic that removes them from the graph.
The implementation has to support the removal of billions of edges, so it must be efficient in both memory and time.
Using Titan directly is disqualified, since it performs a lot of unnecessary, redundant instantiations -- generally, we don't want to load the edges, we just want to remove them from HBase.
/**
 * Deletes the given edge IDs, by splitting them into chunks of 100,000.
 * @param edgeIds Collection of edge IDs to delete
 * @throws IOException
 */
public static void deleteEdges(Iterator<String> edgeIds) throws IOException {
    IDManager idManager = new IDManager(NumberUtil.getPowerOf2(GraphDatabaseConfiguration.CLUSTER_MAX_PARTITIONS.getDefaultValue()));
    byte[] columnFamilyName = "e".getBytes(); // 'e' is your edgestore column-family name
    long deletionTimestamp = System.currentTimeMillis();
    int chunkSize = 100000; // Will contact HBase only once per 100,000 Deletes (=> 50,000 edges, since each edge is removed twice: once as IN and once as OUT)

    org.apache.hadoop.conf.Configuration config = new org.apache.hadoop.conf.Configuration();
    config.set("hbase.zookeeper.quorum", "YOUR-ZOOKEEPER-HOSTNAME");
    config.set("hbase.table", "YOUR-HBASE-TABLE");

    List<Delete> deletions = Lists.newArrayListWithCapacity(chunkSize);
    Connection connection = ConnectionFactory.createConnection(config);
    Table table = connection.getTable(TableName.valueOf(config.get("hbase.table")));
    Iterators.partition(edgeIds, chunkSize)
             .forEachRemaining(edgeIdsChunk -> deleteEdgesChunk(edgeIdsChunk, deletions, table, idManager,
                                                                columnFamilyName, deletionTimestamp));
}
/**
 * Given a collection of edge IDs and a list of Delete objects (cleared on entrance),
 * creates two Delete objects for each edge (one for IN and one for OUT),
 * and deletes them via the given Table instance.
 */
public static void deleteEdgesChunk(List<String> edgeIds, List<Delete> deletions, Table table, IDManager idManager,
                                    byte[] columnFamilyName, long deletionTimestamp) {
    deletions.clear();
    for (String edgeId : edgeIds) {
        RelationIdentifier identifier = RelationIdentifier.parse(edgeId);
        deletions.add(createEdgeDelete(idManager, columnFamilyName, deletionTimestamp, identifier.getRelationId(),
                identifier.getTypeId(), identifier.getInVertexId(), identifier.getOutVertexId(),
                IDHandler.DirectionID.EDGE_IN_DIR));
        deletions.add(createEdgeDelete(idManager, columnFamilyName, deletionTimestamp, identifier.getRelationId(),
                identifier.getTypeId(), identifier.getOutVertexId(), identifier.getInVertexId(),
                IDHandler.DirectionID.EDGE_OUT_DIR));
    }
    try {
        table.delete(deletions);
    } catch (IOException e) {
        logger.error("Failed to delete a chunk due to inner exception: " + e);
    }
}
/**
 * Creates an HBase Delete object for a specific edge.
 * @return HBase Delete object to be used against HBase
 */
private static Delete createEdgeDelete(IDManager idManager, byte[] columnFamilyName, long deletionTimestamp,
                                       long relationId, long typeId, long vertexId, long otherVertexId,
                                       IDHandler.DirectionID directionID) {
    byte[] vertexKey = idManager.getKey(vertexId).getBytes(0, 8); // Size of a long
    byte[] edgeQualifier = makeQualifier(relationId, otherVertexId, directionID, typeId);
    return new Delete(vertexKey)
            .addColumn(columnFamilyName, edgeQualifier, deletionTimestamp);
}
/**
 * Creates the cell qualifier for a specific edge.
 */
private static byte[] makeQualifier(long relationId, long otherVertexId, IDHandler.DirectionID directionID, long typeId) {
    WriteBuffer out = new WriteByteBuffer(32); // Default length of the array is 32; feel free to increase it
    IDHandler.writeRelationType(out, typeId, directionID, false);
    VariableLong.writePositiveBackward(out, otherVertexId);
    VariableLong.writePositiveBackward(out, relationId);
    return out.getStaticBuffer().getBytes(0, out.getPosition());
}
Keep in mind that I do not consider system types and so on -- I assume that the given edge IDs are user edges.
Using this implementation I was able to remove 20 million edges in about 2 minutes.
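For completeness, here is a minimal sketch of how the entry point above might be driven, assuming the edge IDs sit one per line in a text file (the file name edge-ids.txt is hypothetical):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Iterator;

public class EdgeDeletionRunner {
    public static void main(String[] args) throws IOException {
        // One RelationIdentifier string per line; Stream.iterator() keeps it lazy,
        // so the whole file is never held in memory at once.
        Iterator<String> edgeIds = Files.lines(Paths.get("edge-ids.txt")).iterator();
        deleteEdges(edgeIds); // the method shown above
    }
}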

Implementing an OPC DA client from scratch

I would like to implement my own OPC DA client (versions 2.02, 2.05a, 3.00) from scratch, without using any third-party libraries. I would also like to make use of the OPCEnum.exe service to get a list of installed OPC servers. Is there any kind of document that explains, in detail and step by step, the process of implementing an OPC client?
I have a C# implementation, but it's hard to fit it all in here. I'll try to summarize the required steps.
Mostly you need OpcRcw.Comn.dll and OpcRcw.Da.dll from the OPC Core Components Redistributable package, downloadable for free from Opcfoundation.org. Once installed, the files are located in C:\Windows\assembly\GAC_MSIL. Create a reference to them in your project.
About the coding, this is what you should do (there are three objects you want to implement: Server, Group and Item).
Let's start with the server:
Type typeofOPCserver = Type.GetTypeFromProgID(serverName, computerName, true);
m_opcServer = (IOPCServer)Activator.CreateInstance(typeofOPCserver);
m_opcCommon = (IOPCCommon)m_opcServer;
IConnectionPointContainer icpc = (IConnectionPointContainer)m_opcServer;
Guid sinkGUID = typeof(IOPCShutdown).GUID;
icpc.FindConnectionPoint(ref sinkGUID, out m_OPCCP);
m_OPCCP.Advise(this, out m_cookie_CP);
I've stripped a LOT of checking to fit it in here; take it as a sample...
Then you need a method on server to add groups:
// Parameter as following:
// [in] active, so do OnDataChange callback
// [in] Request this Update Rate from Server
// [in] Client Handle, not necessary in this sample
// [in] No time interval to system UTC time
// [in] No Deadband, so all data changes are reported
// [in] Server uses the English language for text values
// [out] Server handle to identify this group in later calls
// [out] The answer from Server to the requested Update Rate
// [in] requested interface type of the group object
// [out] pointer to the requested interface
m_opcServer.AddGroup(m_groupName, Convert.ToInt32(m_isActive), m_reqUpdateRate, m_clientHandle, pTimeBias, pDeadband, m_LocaleID, out m_serverHandle, out m_revUpdateRate, ref iid, out objGroup);
// Get our reference from the created group
m_OPCGroupStateMgt = (IOPCGroupStateMgt)objGroup;
Finally you need to create items:
m_OPCItem = (IOPCItemMgt)m_OPCGroupStateMgt;
m_OPCItem.AddItems(itemList.Length, GetAllItemDefs(itemList), out ppResults, out ppErrors);
Where itemList is an array of OPCITEMDEF. I build the above using GetAllItemDefs from a structure of mine.
private static OPCITEMDEF[] GetAllItemDefs(params OpcItem[] opcItemList)
{
    OPCITEMDEF[] opcItemDefs = new OPCITEMDEF[opcItemList.Length];
    for (int i = 0; i < opcItemList.Length; i++)
    {
        OpcItem opcItem = opcItemList[i];
        opcItemDefs[i].szAccessPath = "";
        opcItemDefs[i].bActive = Convert.ToInt32(opcItem.IsActive);
        opcItemDefs[i].vtRequestedDataType = Convert.ToInt16(opcItem.ItemType, CultureInfo.InvariantCulture);
        opcItemDefs[i].dwBlobSize = 0;
        opcItemDefs[i].pBlob = IntPtr.Zero;
        opcItemDefs[i].hClient = opcItem.ClientHandle;
        opcItemDefs[i].szItemID = opcItem.Id;
    }
    return opcItemDefs;
}
Finally, about enumerating servers, I use these two functions:
/// <summary>
/// Enumerates hosts that may be accessed for server discovery.
/// </summary>
[SecurityPermission(SecurityAction.LinkDemand, UnmanagedCode = true)]
public string[] EnumerateHosts()
{
    IntPtr pInfo;
    int entriesRead = 0;
    int totalEntries = 0;
    int result = NetServerEnum(
        IntPtr.Zero,
        LEVEL_SERVER_INFO_100,
        out pInfo,
        MAX_PREFERRED_LENGTH,
        out entriesRead,
        out totalEntries,
        SV_TYPE_WORKSTATION | SV_TYPE_SERVER,
        IntPtr.Zero,
        IntPtr.Zero);
    if (result != 0)
        throw new ApplicationException("NetApi Error = " + String.Format("0x{0,0:X}", result));
    string[] computers = new string[entriesRead];
    IntPtr pos = pInfo;
    for (int ii = 0; ii < entriesRead; ii++)
    {
        SERVER_INFO_100 info = (SERVER_INFO_100)Marshal.PtrToStructure(pos, typeof(SERVER_INFO_100));
        computers[ii] = info.sv100_name;
        pos = (IntPtr)(pos.ToInt32() + Marshal.SizeOf(typeof(SERVER_INFO_100)));
    }
    NetApiBufferFree(pInfo);
    return computers;
}
/// <summary>
/// Returns a list of servers that support the specified specification on the specified host.
/// </summary>
[SecurityPermission(SecurityAction.LinkDemand, UnmanagedCode = true)]
public string[] GetAvailableServers(Specification specification)
{
    lock (this)
    {
        // Connect to the server.
        ArrayList servers = new ArrayList();
        MULTI_QI[] results = new MULTI_QI[1];
        GCHandle hIID = GCHandle.Alloc(IID_IUnknown, GCHandleType.Pinned);
        results[0].iid = hIID.AddrOfPinnedObject();
        results[0].pItf = null;
        results[0].hr = 0;
        try
        {
            // Create an instance.
            Guid srvid = CLSID;
            CoCreateInstanceEx(srvid, null, CLSCTX.CLSCTX_LOCAL_SERVER, IntPtr.Zero, 1, results);
            m_server = (IOPCServerList2)results[0].pItf;
            // Convert the interface version to a guid.
            Guid catid = new Guid(specification.ID);
            // Get the list of servers in the specified specification.
            IOPCEnumGUID enumerator = null;
            m_server.EnumClassesOfCategories(1, new Guid[] { catid }, 0, null, out enumerator);
            // Read the CLSIDs.
            Guid[] clsids = ReadClasses(enumerator);
            // Release the enumerator.
            if (enumerator != null && enumerator.GetType().IsCOMObject)
                Marshal.ReleaseComObject(enumerator);
            // Fetch the class descriptions.
            foreach (Guid clsid in clsids)
            {
                try
                {
                    string url = CreateUrl(specification, clsid);
                    servers.Add(url);
                }
                catch (Exception) { }
            }
        }
        catch
        {
        }
        finally
        {
            if (hIID.IsAllocated) hIID.Free();
            if (m_server != null && m_server.GetType().IsCOMObject)
                Marshal.ReleaseComObject(m_server);
        }
        return (string[])servers.ToArray(typeof(string));
    }
}
I know I've stripped a lot, but maybe it can still help you ;)
Please mark the answer as correct if you think I've been clear ;)
Kind Regards,
D.