How to make Java display box-drawing characters? I get question marks - encoding

At my Win10 CmdPrompt, the 'chcp' command returns: "Active code page: 437".
I think CodePage_437 has box-drawing characters starting around 180 or so.
I wrote a tiny program to display chars 0-255, but get no box-drawing chars.
public class CodePage_437 {

    /* Display all 256 glyphs in a 16x16 matrix */
    public static void displayAllGlyphs() {
        int c = 0;
        for (int y = 0; y < 16; y++) {
            for (int x = 0; x < 16; x++) {
                System.out.format("%3d", c);
                System.out.format("%2c ", c++);
            }
            System.out.format("\n");
        }
    }

    public static void main(String[] args) {
        displayAllGlyphs();
        System.out.format("---EOJ---\n");
    }
}
I get the same output in the Eclipse 2020-12 Console pane as in a CmdPrompt window. It looks like an encoding problem; I must have tried ten suggested solutions, but none worked.
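For reference, one commonly suggested approach is to print through a PrintStream that encodes its output with code page 437, and to print the Unicode box-drawing code points (U+2500 to U+257F) rather than the raw values 0-255, since (char) 179 in Java is the Unicode character U+00B3, not the CP437 box glyph stored at byte 179. This is only a sketch, and it assumes the JDK in use actually ships the "Cp437" charset (part of the JDK's extended charsets):

import java.io.PrintStream;
import java.io.UnsupportedEncodingException;

public class BoxDrawingDemo {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // Wrap stdout in a PrintStream that encodes with code page 437.
        PrintStream out = new PrintStream(System.out, true, "Cp437");
        // Java chars are Unicode; the Cp437 encoder maps the box-drawing
        // code points back to the single-byte glyphs 0xB3..0xDA.
        out.println("\u250C\u2500\u2500\u2500\u2510");
        out.println("\u2502 A \u2502");
        out.println("\u2514\u2500\u2500\u2500\u2518");
    }
}

Whether the glyphs actually appear still depends on the console itself: the active code page should be 437 (chcp 437) and the console font has to contain the box-drawing characters.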

Related

How to use selection sort in objects and classes

I'm creating two classes, a stop watch and a random numbers class, which I have already done, but I need to create a test program that measures the execution time of sorting 100,000 numbers using selection sort. I know how to write a selection sort; I just don't know how to put the random numbers class together with the selection sort. I get the error message "incompatible types: random numbers cannot be converted to int". I hope someone can help me.
My random numbers class
import java.util.Random;

public class randomnumbers {

    Random ran1 = new Random();
    private int size;

    public randomnumbers() {
        size = 100000;
    }

    public int getSize() {
        return size;
    }

    public void setSize(int newSize) {
        size = newSize;
    }

    public int[] createArray(int[] size) {
        for (int i = 0; i < size.length; i++) {
            size[i] = ran1.nextInt();
        }
        return size;
    }

    public static void printArray(int[] array) {
        for (int i = 0; i < array.length; i++) {
            if (i < 0) {
                System.out.println(array[i] + " ");
            }
        }
    }
}
My test Program
public static void main(String[] args) {
    // Create a StopWatch object
    StopWatch timer = new StopWatch();
    // Create random numbers
    randomnumbers numbers = new randomnumbers();
    // Create the size of the array
    numbers.getSize();
    // Invoke the start method in StopWatch class
    timer.start();
    // Sort random numbers
    selectionSort();
    // Invoke the stop method in StopWatch class
    timer.stop();
    // Display the execution time
    System.out.println("The execution time for sorting 100,000 " +
            "numbers using selection sort: " + timer.getElapsedTime() +
            " milliseconds");
}

// selectionSort performs a selection sort on an array
public static void selectionSort(int[] array) {
    for (int i = 0; i < array.length - 1; i++) {
        int min = array[i];
        int minIndex = i;
        for (int j = i + 1; j < array.length; j++) {
            if (array[j] < min) {
                min = array[j];
                minIndex = j;
            }
        }
        if (i != minIndex) {
            array[minIndex] = array[i];
            array[i] = min;
        }
    }
}
}
Where exactly are you getting the "incompatible types: random numbers cannot be converted to int" error?
There are multiple issues with the code:
Unconventional naming.
The size field in the randomnumbers class is used as the actual array size in the constructor, but in createArray it is shadowed by a parameter of the same name with a different type and meaning.
You are not passing any array to selectionSort in main; this is where I get a compile error on your code.
printArray has an if (i < 0) condition; the loop index i is never negative, so nothing is ever printed.
Feeding selectionSort with numbers.createArray(new int[numbers.getSize()]) compiles and ends up sorting the array, as sketched below.
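For example, the test program could look roughly like this. This is only a sketch: it assumes the StopWatch class from the assignment (with start(), stop() and getElapsedTime()) is on the classpath, and the class name SortTest is made up here:

public class SortTest {

    public static void main(String[] args) {
        randomnumbers numbers = new randomnumbers();
        // build an array of the requested size and fill it with random values
        int[] data = numbers.createArray(new int[numbers.getSize()]);

        StopWatch timer = new StopWatch();
        timer.start();
        selectionSort(data);   // pass the array to the sort
        timer.stop();

        System.out.println("The execution time for sorting 100,000 numbers "
                + "using selection sort: " + timer.getElapsedTime() + " milliseconds");
    }

    // same selection sort as in the question, now actually given an array
    public static void selectionSort(int[] array) {
        for (int i = 0; i < array.length - 1; i++) {
            int minIndex = i;
            for (int j = i + 1; j < array.length; j++) {
                if (array[j] < array[minIndex]) {
                    minIndex = j;
                }
            }
            int tmp = array[minIndex];
            array[minIndex] = array[i];
            array[i] = tmp;
        }
    }
}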

ASCII conversion

I wanted to convert ASCII values to their corresponding characters, so I wrote this simple code:
public class Test {
    public static void main(String[] args) {
        int i = 0;
        char ch = 'c';
        for (i = 0; i < 127; i++) {
            ch = (char) i;
            System.out.print(ch + "\t");
        }
        System.out.println("finish");
    }
}
But the output shows nothing, and control does not even seem to leave the loop, although the process finishes. Please explain this behavior and the right code.
As other people have pointed out, you have included the control characters; if you alter the loop (as below) you get the full set, excluding these control characters:
public static void main(String[] args) {
    for (int i = 33; i < 127; i++) {
        char ch = (char) i;
        System.out.print(i + ":" + ch + "\t");
    }
    System.out.println("finish");
}

Attempts to call a method in the same class not working (java)

I'm creating a random number generator which then sorts the digits from largest to smallest. Initially it worked, but then I changed a few things. As far as I'm aware I undid all the changes (Ctrl + Z), but now I have errors at the points where I try to call the methods. This is probably a very amateur problem, but I haven't found an answer. The error I'm met with is "method in class cannot be applied to given types".
Here's my code:
import java.util.Random;

public class RandomMath {

    public static void main(String[] args) {
        String bigger = bigger(); /* ERROR HERE */
        System.out.println(bigger);
    }

    // create method for generating random numbers
    public static int generator(int n) {
        Random randomGen = new Random();
        // set max int to 10000 as generator works between 0 and n-1
        for (int i = 0; i < 1; i++) {
            n = randomGen.nextInt(10000);
            // exclude 1111, 2222, 3333, 4444, 5555, 6666, 7777, 8888, 9999, 0000
            if ((n == 1111 || n == 2222 || n == 3333 || n == 4444 || n == 5555)
                    || (n == 6666 || n == 7777 || n == 8888 || n == 9999 || n == 0000)) {
                i--;
            }
        }
        return n;
    }

    // create method for denoting the bigger number
    public static String bigger(int generated) {
        generated = generator(); /* ERROR HERE */
        System.out.println(generated);
        int[] times = new int[10];
        while (generated != 0) {
            int val = generated % 10;
            times[val]++;
            generated /= 10;
        }
        String bigger = "";
        for (int i = 9; i >= 0; i--) {
            for (int j = 0; j < times[i]; j++) {
                bigger += i;
            }
        }
        return bigger;
    }
}
You have not defined a method bigger(), only bigger(int generated). Therefore, you must call your bigger method with an integer argument. (The call to generator() inside bigger has the same problem, since only generator(int n) is defined.) One possible fix is sketched below.
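As a sketch of one way to make the calls and the signatures line up: since both methods overwrite their parameters immediately, they can simply take no arguments. This is only one possible fix, not necessarily what the original author intended, and the repdigit check is compressed here into a modulo test:

import java.util.Random;

public class RandomMath {

    public static void main(String[] args) {
        System.out.println(bigger());
    }

    // generate a random number below 10000, excluding 0 and the repdigits
    // 1111, 2222, ..., 9999 (all of which are multiples of 1111)
    public static int generator() {
        Random randomGen = new Random();
        int n = randomGen.nextInt(10000);
        while (n % 1111 == 0) {
            n = randomGen.nextInt(10000);
        }
        return n;
    }

    // sort the digits of the generated number from largest to smallest
    public static String bigger() {
        int generated = generator();
        int[] times = new int[10];
        while (generated != 0) {
            times[generated % 10]++;
            generated /= 10;
        }
        StringBuilder result = new StringBuilder();
        for (int digit = 9; digit >= 0; digit--) {
            for (int j = 0; j < times[digit]; j++) {
                result.append(digit);
            }
        }
        return result.toString();
    }
}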

Passing array to stringstream

This is something I had never noticed, but for some reason you can't do something like
sstr << myarray;
If you do that, "sstr" would contain the address of "myarray". You would have to do
for (int i = 0; i < sizeof(myarray); i++)
{
    sstr << myarray[i];
}
I would like to know why this happens. I don't remember ever having to do that, but personally I think sometimes reality itself changes just to annoy me.
You need to define your own stream insertion operator for your data type to be able to display it like that.
#include <iostream>
#include <vector>
using namespace std;

template <typename T>
class mydata
{
public:
    mydata(int size = 0) { for (int i = 0; i < size; i++) add(0); }
    ~mydata() { }
    void add(T x) { data.push_back(x); }
    void remove(int pos) { data.erase(data.begin() + pos); }
    T& operator[](int pos) { return data[pos]; }
    friend ostream& operator<<(ostream& os, const mydata<T>& x)
    {
        for (size_t i = 0; i < x.data.size(); i++) os << x.data[i] << " ";
        return os;
    }
private:
    vector<T> data;
};
That would be a standard class for encapsulating your data. If you want to do something like mydata<int> b(8); b[7] = 8; you can. The << operator is defined inline inside the class because, for a class template, that is the simplest way to give each instantiation its own matching friend operator.
Finally, this is how the class can be used.
int main()
{
    mydata<char> charData;
    mydata<int> intData;
    for (int i = 0; i < 20; i++) {
        charData.add(65 + i);
        intData.add(i);
    }
    cout << charData << endl;
    cout << intData;
    cin.get();
    return 0;
}
The output looks like this:
A B C D E F G H I J K L M N O P Q R S T
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19

Java based Neural Network --- how to implement backpropagation

I am building a test neural network and it is definitely not working. My main problem is backpropagation. From my research, I know that it is easy to use the sigmoid function. Therefore, I update each weight by (1 - Output)(Output)(target - Output), but the problem with this is: what if my Output is 1 but my target is not? If it is 1 at some point, then the weight update will always be 0. For now I am just trying to get the darn thing to add the inputs from 2 input neurons, so the optimal weights should just be 1, as the output neuron simply adds its inputs. I'm sure I have messed this up in lots of places, but here is my code:
import java.util.ArrayList;

public class Main {

    public static void main(String[] args) {
        Double[] inputs = {1.0, 2.0};
        ArrayList<Double> answers = new ArrayList<Double>();
        answers.add(3.0);

        net myNeuralNet = new net(2, 1, answers);
        for (int i = 0; i < 200; i++) {
            myNeuralNet.setInputs(inputs);
            myNeuralNet.start();
            myNeuralNet.backpropagation();
            myNeuralNet.printOutput();
            System.out.println("*****");
            for (int j = 0; j < myNeuralNet.getOutputs().size(); j++) {
                myNeuralNet.getOutputs().get(j).resetInput();
                myNeuralNet.getOutputs().get(j).resetOutput();
                myNeuralNet.getOutputs().get(j).resetNumCalled();
            }
        }
    }
}
package myneuralnet;

import java.util.ArrayList;

public class net {

    private ArrayList<neuron> inputLayer;
    private ArrayList<neuron> outputLayer;
    private ArrayList<Double> answers;

    public net(Integer numInput, Integer numOut, ArrayList<Double> answers) {
        inputLayer = new ArrayList<neuron>();
        outputLayer = new ArrayList<neuron>();
        this.answers = answers;

        for (int i = 0; i < numOut; i++) {
            outputLayer.add(new neuron(true));
        }
        for (int i = 0; i < numInput; i++) {
            ArrayList<Double> randomWeights = createRandomWeights(numInput);
            inputLayer.add(new neuron(outputLayer, randomWeights, -100.00, true));
        }
        for (int i = 0; i < numOut; i++) {
            outputLayer.get(i).setBackConn(inputLayer);
        }
    }

    public ArrayList<neuron> getOutputs() {
        return outputLayer;
    }

    public void backpropagation() {
        for (int i = 0; i < answers.size(); i++) {
            neuron iOut = outputLayer.get(i);
            ArrayList<neuron> iOutBack = iOut.getBackConn();
            Double iSigDeriv = (1 - iOut.getOutput()) * iOut.getOutput();
            Double iError = (answers.get(i) - iOut.getOutput());

            System.out.println("Answer: " + answers.get(i) + " iOut: " + iOut.getOutput()
                    + " Error: " + iError + " Sigmoid: " + iSigDeriv);

            for (int j = 0; j < iOutBack.size(); j++) {
                neuron jNeuron = iOutBack.get(j);
                Double ijWeight = jNeuron.getWeight(i);
                System.out.println("ijWeight: " + ijWeight);
                System.out.println("jNeuronOut: " + jNeuron.getOutput());
                jNeuron.setWeight(i, ijWeight + (iSigDeriv * iError * jNeuron.getOutput()));
            }
        }
        for (int i = 0; i < inputLayer.size(); i++) {
            inputLayer.get(i).resetInput();
            inputLayer.get(i).resetOutput();
        }
    }

    public ArrayList<Double> createRandomWeights(Integer size) {
        ArrayList<Double> iWeight = new ArrayList<Double>();
        for (int i = 0; i < size; i++) {
            Double randNum = (2 * Math.random()) - 1;
            iWeight.add(randNum);
        }
        return iWeight;
    }

    public void setInputs(Double[] is) {
        for (int i = 0; i < is.length; i++) {
            inputLayer.get(i).setInput(is[i]);
        }
        for (int i = 0; i < outputLayer.size(); i++) {
            outputLayer.get(i).resetInput();
        }
    }

    public void start() {
        for (int i = 0; i < inputLayer.size(); i++) {
            inputLayer.get(i).fire();
        }
    }

    public void printOutput() {
        for (int i = 0; i < outputLayer.size(); i++) {
            System.out.println(outputLayer.get(i).getOutput().toString());
        }
    }
}
package myneuralnet;

import java.util.ArrayList;

public class neuron {

    private ArrayList<neuron> connections;
    private ArrayList<neuron> backconns;
    private ArrayList<Double> weights;
    private Double threshold;
    private Double input;
    private Boolean isOutput = false;
    private Boolean isInput = false;
    private Double totalSignal;
    private Integer numCalled;
    private Double myOutput;

    public neuron(ArrayList<neuron> conns, ArrayList<Double> weights, Double threshold) {
        this.connections = conns;
        this.weights = weights;
        this.threshold = threshold;
        this.totalSignal = 0.00;
        this.numCalled = 0;
        this.backconns = new ArrayList<neuron>();
        this.input = 0.00;
    }

    public neuron(ArrayList<neuron> conns, ArrayList<Double> weights, Double threshold, Boolean isin) {
        this.connections = conns;
        this.weights = weights;
        this.threshold = threshold;
        this.totalSignal = 0.00;
        this.numCalled = 0;
        this.backconns = new ArrayList<neuron>();
        this.input = 0.00;
        this.isInput = isin;
    }

    public neuron(Boolean tf) {
        this.connections = new ArrayList<neuron>();
        this.weights = new ArrayList<Double>();
        this.threshold = 0.00;
        this.totalSignal = 0.00;
        this.numCalled = 0;
        this.isOutput = tf;
        this.backconns = new ArrayList<neuron>();
        this.input = 0.00;
    }

    public void setInput(Double input) {
        this.input = input;
    }

    public void setOut(Boolean tf) {
        this.isOutput = tf;
    }

    public void resetNumCalled() {
        numCalled = 0;
    }

    public void setBackConn(ArrayList<neuron> backs) {
        this.backconns = backs;
    }

    public Double getOutput() {
        return myOutput;
    }

    public Double getInput() {
        return totalSignal;
    }

    public Double getRealInput() {
        return input;
    }

    public ArrayList<Double> getWeights() {
        return weights;
    }

    public ArrayList<neuron> getBackConn() {
        return backconns;
    }

    public Double getWeight(Integer i) {
        return weights.get(i);
    }

    public void setWeight(Integer i, Double d) {
        weights.set(i, d);
    }

    public void setOutput(Double d) {
        myOutput = d;
    }

    public void activation(Double myInput) {
        numCalled++;
        totalSignal += myInput;
        if (numCalled == backconns.size() && isOutput) {
            System.out.println("Total Sig: " + totalSignal);
            setInput(totalSignal);
            setOutput(totalSignal);
        }
    }

    public void activation() {
        Double activationValue = 1 / (1 + Math.exp(input));
        setInput(activationValue);
        fire();
    }

    public void fire() {
        for (int i = 0; i < connections.size(); i++) {
            Double iWeight = weights.get(i);
            neuron iConn = connections.get(i);
            myOutput = (1 / (1 + Math.exp(-input))) * iWeight;
            iConn.activation(myOutput);
        }
    }

    public void resetInput() {
        input = 0.00;
        totalSignal = 0.00;
    }

    public void resetOutput() {
        myOutput = 0.00;
    }
}
OK, that is a lot of code, so allow me to explain. The net is simple for now, just an input layer and an output layer; I want to add a hidden layer later, but I'm taking baby steps for now. Each layer is an ArrayList of neurons. Input neurons are loaded with inputs, a 1 and a 2 in this example. These neurons fire, which calculates the sigmoid of the inputs and outputs that to the output neurons, which add them and store the value. Then the net backpropagates by taking (answer - output)(output)(1 - output)(output of the specific input neuron) and updates the weights accordingly. A lot of the time it cycles through and I get infinity, which seems to correlate with negative weights or the sigmoid. When that doesn't happen it converges to 1, and since (1 - an output of 1) is 0, my weights stop updating.
The numCalled and totalSignal values are just there so the algorithm waits for all neuron inputs before continuing. I know I'm doing this in an odd way, but the neuron class has an ArrayList of neurons called connections to hold the neurons it is forward-connected to. Another ArrayList called backconns holds the backward connections. I should be updating the correct weights as well, since I am getting all back connections between neurons i and j, but from each of the neurons j (the layer above i) I am only pulling weight i. I apologize for the messiness; I've been trying lots of things for hours upon hours now and still cannot figure it out. Any help is greatly appreciated!
Some of the best textbooks on neural networks in general are Chris Bishop's and Simon Haykin's. Try reading through the chapter on backprop and understand why the terms in the weight update rule are the way they are. The reason I am asking you to do that is that backprop is more subtle than it seems at first. Things change a bit if you use a linear activation function for the output layer (think about why you might want to do that; hint: post-processing), or if you add a hidden layer. It got clearer for me when I actually read the book.
You might want to compare your code to this single layer perceptron.
I think you have a bug in your backprop algo. Also, try replacing the sigmoid with a squarewave.
http://web.archive.org/web/20101228185321/http://en.literateprograms.org/Perceptron_%28Java%29
what if my Output is 1 but my target is not?
The sigmoid function 1/(1 + Math.exp(-x)) never equals 1. Its limit as x approaches infinity is 1, but that is a horizontal asymptote, so the function never actually reaches it. Therefore, if this expression is used to compute all of your output values, your output will never be 1, and (1 - output) should never equal 0.
I think your issue is in the calculation of the output. For a neural network, the output of each neuron is typically sigmoid(dot product of inputs and weights). In other words, value = input1 * weight1 + input2 * weight2 + ... (for each weight of the neuron) + biasWeight, and that neuron's output = 1 / (1 + Math.exp(-value)). If it is calculated this way, the output will never be equal to 1, as sketched below.
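To illustrate that last point, here is a minimal, self-contained sketch of a single sigmoid output neuron trained with exactly that update rule. The names (learningRate, bias) and the target of 0.9 are illustrative choices, not taken from the question's code:

public class SigmoidSketch {

    public static void main(String[] args) {
        double[] inputs = {1.0, 2.0};
        double[] weights = {0.1, -0.3};   // small starting weights
        double bias = 0.0;
        double target = 0.9;              // a sigmoid output can never reach 1.0
        double learningRate = 0.5;

        for (int epoch = 0; epoch < 1000; epoch++) {
            // forward pass: output = sigmoid(dot(inputs, weights) + bias)
            double value = bias;
            for (int i = 0; i < inputs.length; i++) {
                value += inputs[i] * weights[i];
            }
            double output = 1.0 / (1.0 + Math.exp(-value));

            // backward pass: delta = (target - output) * output * (1 - output)
            double delta = (target - output) * output * (1.0 - output);
            for (int i = 0; i < weights.length; i++) {
                weights[i] += learningRate * delta * inputs[i];
            }
            bias += learningRate * delta;

            if (epoch % 200 == 0) {
                System.out.println("epoch " + epoch + ": output = " + output);
            }
        }
    }
}

Because the output stays strictly between 0 and 1, the (1 - output) factor never hits exactly 0, so the weights keep moving toward the target.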