How to print out a Hash table in MATLAB?

I have some code in which a Hashtable is created using java.util.Hashtable().
Now I'd like to know how to see the contents of that table, i.e. how to print out that Hashtable.

To iterate means to loop through a collection, going through its elements one by one.
Here is a code example:
import java.util.Hashtable;
import java.util.Map;

public class IterateHashtable {
    public static void main(String[] args) {
        // create the Hashtable
        Hashtable<String, String> myHash = new Hashtable<String, String>();
        // add key-value pairs to myHash
        myHash.put("Apple", "Red");
        myHash.put("Banana", "Yellow");
        myHash.put("Guava", "Green");
        // get entrySet() and iterate over it with a for-each loop
        for (Map.Entry<String, String> entry1 : myHash.entrySet()) {
            System.out.println("Fruit : " + entry1.getKey() + " Color : " + entry1.getValue());
        }
    }
}
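If you only need a quick dump rather than one formatted line per entry, note that Hashtable overrides toString(), so simply printing the table shows every key-value pair. A minimal sketch:

import java.util.Hashtable;

public class PrintHashtable {
    public static void main(String[] args) {
        Hashtable<String, String> myHash = new Hashtable<String, String>();
        myHash.put("Apple", "Red");
        myHash.put("Banana", "Yellow");
        // Hashtable.toString() renders all entries as {key=value, key=value, ...}
        System.out.println(myHash);
    }
}

Since MATLAB can call Java methods on the object directly, calling toString() (or just displaying the variable) on the Hashtable you created there should show the same output.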

Related

Writable Classes in mapreduce

How can I use the values from the HashSet (the docid and offset) in the reduce Writable so as to connect the map Writable with the reduce Writable?
The mapper (LineIndexMapper) works fine, but in the reducer (LineIndexReducer) I get an error that it can't take a string as an argument when I type this:
context.write(key, new IndexRecordWritable("some string"));
even though I have the public String toString() in the reduce Writable too.
I believe the HashSet in the reducer's Writable (IndexRecordWritable.java) maybe isn't taking the values correctly?
I have the code below.
IndexMapRecordWritable.java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

public class IndexMapRecordWritable implements Writable {
    private LongWritable offset;
    private Text docid;

    public LongWritable getOffsetWritable() {
        return offset;
    }

    public Text getDocidWritable() {
        return docid;
    }

    public long getOffset() {
        return offset.get();
    }

    public String getDocid() {
        return docid.toString();
    }

    public IndexMapRecordWritable() {
        this.offset = new LongWritable();
        this.docid = new Text();
    }

    public IndexMapRecordWritable(long offset, String docid) {
        this.offset = new LongWritable(offset);
        this.docid = new Text(docid);
    }

    public IndexMapRecordWritable(IndexMapRecordWritable indexMapRecordWritable) {
        this.offset = indexMapRecordWritable.getOffsetWritable();
        this.docid = indexMapRecordWritable.getDocidWritable();
    }

    @Override
    public String toString() {
        StringBuilder output = new StringBuilder();
        output.append(docid);
        output.append(offset);
        return output.toString();
    }

    @Override
    public void write(DataOutput out) throws IOException {
    }

    @Override
    public void readFields(DataInput in) throws IOException {
    }
}
IndexRecordWritable.java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.HashSet;
import org.apache.hadoop.io.Writable;
public class IndexRecordWritable implements Writable {
    // Save each index record from maps
    private HashSet<IndexMapRecordWritable> tokens = new HashSet<IndexMapRecordWritable>();

    public IndexRecordWritable() {
    }

    public IndexRecordWritable(
            Iterable<IndexMapRecordWritable> indexMapRecordWritables) {
    }

    @Override
    public String toString() {
        StringBuilder output = new StringBuilder();
        return output.toString();
    }

    @Override
    public void write(DataOutput out) throws IOException {
    }

    @Override
    public void readFields(DataInput in) throws IOException {
    }
}
Alright, here is my answer based on a few assumptions. Going by the pre-condition and post-condition in the reducer class's comments, the final output is a text file containing the key and the file names separated by commas.
In this case, you really don't need the IndexRecordWritable class. You can simply write to your context using
context.write(key, new Text(valueBuilder.substring(0, valueBuilder.length() - 1)));
with the class declaration line as
public class LineIndexReducer extends Reducer<Text, IndexMapRecordWritable, Text, Text>
Don't forget to set the correct output class in the driver.
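For reference, here is a minimal sketch of that simpler reducer; it is only a sketch, assuming the IndexMapRecordWritable from your question and building the comma-separated value with a StringBuilder:

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class LineIndexReducer extends Reducer<Text, IndexMapRecordWritable, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<IndexMapRecordWritable> values, Context context)
            throws IOException, InterruptedException {
        StringBuilder valueBuilder = new StringBuilder();
        for (IndexMapRecordWritable val : values) {
            // relies on IndexMapRecordWritable.toString() for the docid/offset text
            valueBuilder.append(val);
            valueBuilder.append(",");
        }
        // drop the trailing comma before emitting
        context.write(key, new Text(valueBuilder.substring(0, valueBuilder.length() - 1)));
    }
}

and in the driver, using the standard Hadoop Job API:

job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);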
That should serve the purpose according to the post-condition in your reducer class. But if you really want to write a Text-IndexRecordWritable pair to your context, there are two ways to approach it:
with a String as an argument (based on your attempt to pass a string even though your IndexRecordWritable constructor is not designed to accept strings), and
with a HashSet as an argument (based on the HashSet initialised in the IndexRecordWritable class).
Since the constructor of your IndexRecordWritable class is not designed to accept a String as input, you cannot pass a string; hence the error you are getting that a string can't be used as an argument. PS: if you want your constructor to accept Strings, you must add another constructor to your IndexRecordWritable class, as below:
// Save each index record from maps
private HashSet<IndexMapRecordWritable> tokens = new HashSet<IndexMapRecordWritable>();
// to save the string
private String value;

public IndexRecordWritable() {
}

public IndexRecordWritable(
        HashSet<IndexMapRecordWritable> indexMapRecordWritables) {
    /***/
}

// to accept a string
public IndexRecordWritable(String value) {
    this.value = value;
}
But that won't help if you want to use the HashSet, so approach #1 can't be used: you can't pass a string.
That leaves us with approach #2: passing a HashSet as an argument, since you want to make use of the HashSet. In this case, you must create a HashSet in your reducer before passing it as an argument to IndexRecordWritable in context.write.
To do this, your reducer must look like this.
@Override
protected void reduce(Text key, Iterable<IndexMapRecordWritable> values, Context context) throws IOException, InterruptedException {
    //StringBuilder valueBuilder = new StringBuilder();
    HashSet<IndexMapRecordWritable> set = new HashSet<>();
    for (IndexMapRecordWritable val : values) {
        set.add(val);
        //valueBuilder.append(val);
        //valueBuilder.append(",");
    }
    //write the key and the adjusted value (removing the last comma)
    //context.write(key, new IndexRecordWritable(valueBuilder.substring(0, valueBuilder.length() - 1)));
    context.write(key, new IndexRecordWritable(set));
    //valueBuilder.setLength(0);
}
and your IndexRecordWritable.java must have this.
// Save each index record from maps
private HashSet<IndexMapRecordWritable> tokens = new HashSet<IndexMapRecordWritable>();
// to save the string
//private String value;

public IndexRecordWritable() {
}

public IndexRecordWritable(
        HashSet<IndexMapRecordWritable> indexMapRecordWritables) {
    /***/
    tokens.addAll(indexMapRecordWritables);
}
Remember, this is not the requirement according to the description of your reducer, where it says:
POST-CONDITION: emit the output a single key-value where all the file names are separated by a comma ",". <"marcello", "a.txt#3345,b.txt#344,c.txt#785">
If you still choose to emit (Text, IndexRecordWritable), remember to process the HashSet in IndexRecordWritable to get it in the desired format.
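For example, a toString() inside IndexRecordWritable that walks the tokens set could produce that format; this is only a sketch, assuming the docid#offset layout from the post-condition and the getters already present in IndexMapRecordWritable:

@Override
public String toString() {
    StringBuilder output = new StringBuilder();
    for (IndexMapRecordWritable token : tokens) {
        if (output.length() > 0) {
            output.append(",");   // separate entries with commas
        }
        output.append(token.getDocid()).append("#").append(token.getOffset());
    }
    return output.toString();
}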

Problems about ISerializationCallbackReceiver In Unity Script API

I am learning about serialization in Unity, and I know that ISerializationCallbackReceiver can be used to help serialize complex data structures that cannot be serialized directly. I have tested the following code in Unity 2017 and found some problems.
public class SerializationCallbackScript : MonoBehaviour, ISerializationCallbackReceiver
{
    public List<int> _keys = new List<int> { 3, 4, 5 };
    public List<string> _values = new List<string> { "I", "Love", "Unity" };

    //Unity doesn't know how to serialize a Dictionary
    public Dictionary<int, string> _myDictionary = new Dictionary<int, string>();

    public void OnBeforeSerialize()
    {
        _keys.Clear();
        _values.Clear();
        foreach (var kvp in _myDictionary)
        {
            _keys.Add(kvp.Key);
            _values.Add(kvp.Value);
        }
    }

    public void OnAfterDeserialize()
    {
        _myDictionary = new Dictionary<int, string>();
        for (int i = 0; i != Math.Min(_keys.Count, _values.Count); i++)
            _myDictionary.Add(_keys[i], _values[i]);
    }

    void OnGUI()
    {
        foreach (var kvp in _myDictionary)
            GUILayout.Label("Key: " + kvp.Key + " value: " + kvp.Value);
    }
}
Obviously, this example shows how to serialize a dictionary by converting it into lists and restoring it when deserializing. However, when I actually run the code, I find _keys and _values are both empty (keys.Count = 0) in the inspector. Even worse, I cannot modify their values in the inspector. As a result, nothing appears when entering play mode.
So I want to know what the actual usage of ISerializationCallbackReceiver is, and the reason for what happened.
OnBeforeSerialize is called in the editor every time the inspector updates (basically all the time), and it calls Clear on your key and value lists.
I changed this for my case like this:
#if UNITY_EDITOR
using UnityEditor;
#endif
public class .... {
    public void OnBeforeSerialize() {
#if UNITY_EDITOR
        if (!EditorApplication.isPlaying
            && !EditorApplication.isUpdating
            && !EditorApplication.isCompiling) return;
#endif
        ...
so it does not clear the lists at edit time.

custom item writer to write to database using list of list which contains hashmap

I am new to Spring Batch and my requirement is to read a dynamic Excel sheet and insert it into a database. I am able to read the Excel sheet and pass it to the writer, but only the last record in the sheet gets inserted into the database. Here is my code for the item writer:
@Bean
public ItemWriter<List<LinkedHashMap<String, String>>> tempwrite() {
    JdbcBatchItemWriter<List<LinkedHashMap<String, String>>> databaseItemWriter = new JdbcBatchItemWriter<>();
    databaseItemWriter.setDataSource(dataSource);
    databaseItemWriter.setSql("insert into table values(next value for seq_table,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)");
    ItemPreparedStatementSetter<List<LinkedHashMap<String, String>>> valueSetter =
            new databasePsSetter();
    databaseItemWriter.setItemPreparedStatementSetter(valueSetter);
    return databaseItemWriter;
}
and below is my prepared statement setter class
public class databasePsSetter implements ItemPreparedStatementSetter<List<LinkedHashMap<String, String>>> {
    @Override
    public void setValues(List<LinkedHashMap<String, String>> item, PreparedStatement ps) throws SQLException {
        int columnNumber = 1;
        for (LinkedHashMap<String, String> row : item) {
            columnNumber = 1;
            for (Map.Entry<String, String> entry : row.entrySet()) {
                ps.setString(columnNumber, entry.getValue());
                columnNumber++;
            }
        }
    }
}
I have seen many examples, but all of them use a DTO class, and I am not sure whether this is the correct way of implementing it for a list of lists containing hashmaps.

How to iterate List<HashMap> in drools file and update the Hash Map objects and return the updated List

I am facing a problem with iterating over a List in a drl file. I need to retrieve each HashMap object and check its 'issue' key. If the value of 'issue' is not empty, then I need to add a value to the 'alert' key.
public class ReservationAlerts {

    public enum AlertType {
        RESERVATIONDETAILSRESPONSE
    }

    @SuppressWarnings("rawtypes")
    private List<HashMap> reservationMap;

    @SuppressWarnings("rawtypes")
    public List<HashMap> getReservationMap() {
        return reservationMap;
    }

    @SuppressWarnings("rawtypes")
    public void setReservationMap(List<HashMap> reservationMap) {
        this.reservationMap = reservationMap;
    }
}
Main Java Program:
DroolsTest.java
public class DroolsTest {
    @SuppressWarnings("rawtypes")
    public static final void main(String[] args) {
        try {
            // load up the knowledge base
            KnowledgeBase kbase = readKnowledgeBase();
            StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
            ReservationAlerts rAlerts = new ReservationAlerts();
            List<HashMap> hashMapList = new ArrayList<>();
            HashMap<String, String> hMap = new HashMap<>();
            hMap.put("rId", "101");
            hMap.put("fName", "ABC");
            hMap.put("lName", "DEF");
            hMap.put("issue", "1qaz");
            hMap.put("alert", "");
            hashMapList.add(hMap);
            HashMap<String, String> hMapI = new HashMap<>();
            hMapI.put("rId", "102");
            hMapI.put("fName", "GHI");
            hMapI.put("lName", "JKL");
            hMapI.put("issue", "");
            hMapI.put("alert", "");
            hashMapList.add(hMapI);
            rAlerts.setReservationMap(hashMapList);
            System.out.println("**********BEFORE************");
            System.out.println(hMap.keySet());
            System.out.println("****************************");
            System.out.println(hMapI.keySet());
            System.out.println("****************************");
            ksession.insert(rAlerts);
            ksession.fireAllRules();
            .............
I need to update the HashMap objects and return the updated List from the drl file. Can anyone help me, please?
Drools file being triggered from the Java file:
Reservations.drl
import com.dwh.poc.ReservationAlerts;
import java.util.List;
import java.util.HashMap;
import java.util.Iterator;

// declare any global variables here
dialect "java"

rule "Reservation Alert"
    // Retrieve List from ReservationAlerts
    when
        $rAlerts : ReservationAlerts()
        $alertsMapList : List() from $rAlerts.reservationMap
    then
        // Iterate List and retrieve HashMap object
        for (Iterator it = $alertsMapList.iterator(); it.hasNext();) {
            $alertMap : it.next();
        }
The from in your rule will automatically loop over the list returned by $rAlerts.reservationMap. This means that the pattern you need to use on the left-hand side of your from is Map and not List.
Once you have the Map pattern you can add the constraint about the 'issue' key.
Try something like this:
rule "Reservation Alert"
when
$rAlerts : ReservationAlerts()
$map : Map(this["issue"] != "") from $rAlerts.reservationMap
then
$map.put("alert", "XXXX");
end
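Since the rule mutates the very HashMap instances that are inside the list held by ReservationAlerts, there is nothing extra to return from the drl file: after fireAllRules() the updated values can be read from the same object. A minimal sketch on the Java side, reusing the rAlerts variable from your DroolsTest (key names as in your maps):

ksession.insert(rAlerts);
ksession.fireAllRules();
// the maps in the list were updated in place by the rule
for (HashMap map : rAlerts.getReservationMap()) {
    System.out.println(map.get("rId") + " -> alert: " + map.get("alert"));
}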
Hope it helps,

How can I make a .NET object friendlier to PowerShell?

I have a .NET class that I want to use from both C# and PowerShell. Cut down to its bare bones, it’s something like this:
class Record
{
    Dictionary<string, string> _fields = new Dictionary<string, string>();
    public IDictionary<string, string> Fields { get { return _fields; } }
    //...other stuff...
}
So I get a Record from somewhere and I can do record.Fields["foo"] = "bar" to modify/add fields from either C# or PowerShell. Works great.
But I’d like to make it a little more PowerShell-friendly. I want to do record.foo = "bar" and have it call the appropriate getters and setters. (I’d like to do the same with C# at some point, probably using dynamic, but that’s a separate fun project). Seems like I need a wrapping proxy class.
I know about add-member but I am worried that it would be slow and use a lot of memory when dealing with tens of thousands of records. I also don’t know how to have it handle record.somenewvalue = "abc".
I’m guessing that I want to create my proxy class in C#, using the facilities in System.Management.Automation or Microsoft.PowerShell but don’t quite know where to start. Can anyone help point me in the right direction?
I figured out one way to do it. Just make Record itself a dictionary. Here's some sample code:
$source = #"
using System.Collections.Generic;
public class Record : Dictionary<string, object>
{
}
"#
add-type -typedefinition $source
$x = new-object record
$x.add('abc', '1') # add
$x['def'] = '2' # indexer
$x.ghi = 3 # "member"
$x
It outputs:
Key Value
--- -----
abc 1
def 2
ghi 3
Here is some code illustrating three suggestions:
using System.Collections;
using System.Collections.Generic;

public class Record : IEnumerable<KeyValuePair<string, string>>
{
    public Record() { }

    public Record(Hashtable ht)
    {
        foreach (var key in ht.Keys)
        {
            this[key.ToString()] = ht[key].ToString();
        }
    }

    Dictionary<string, string> _fields = new Dictionary<string, string>();

    public IDictionary<string, string> Fields { get { return _fields; } }

    public string this[string fieldName]
    {
        get { return _fields[fieldName]; }
        set { _fields[fieldName] = value; }
    }

    //...other stuff...

    public IEnumerator<KeyValuePair<string, string>> GetEnumerator()
    {
        return _fields.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
Implement the string indexer.
Implement IEnumerable<KeyValuePair<string, string>> to improve how PowerShell prints the output.
A constructor taking a single Hashtable as a parameter allows you to use @{} notation on initial assignment.
Now here is some PowerShell illustrating the improved usage:
Add-Type -Path 'C:\Users\zippy\Documents\Visual Studio 2010\Projects\ConsoleApplication1\ClassLibrary1\bin\Debug\ClassLibrary1.dll'
[ClassLibrary1.Record] $foo = New-Object ClassLibrary1.Record;
$foo["bar"]
$foo["bar"] = "value";
$foo["bar"]
[ClassLibrary1.Record] $foo2 = @{
    "bar" = "value";
    "something" = "to talk about";
};
$foo2;