SAS Proc Import Specific Range from xlsm file - import

I need to import an xlsm file and pull just one cell value from a specific worksheet.
I've tried the code below but get a 'CLI error trying to establish connection' error. I do have to use the rsubmit blocks. What am I doing wrong?
RSUBMIT INHERITLIB=(mywork);
OPTIONS msglevel=i VALIDVARNAME= any;
proc import datafile="\\mysite.com\folder1\folder2\myfile.xlsm"
dbms=EXCELCS replace out=Output;
range="EmailSummary$O5";
run;
ENDRSUBMIT;

If you want to import only one cell, you need to tell PROC IMPORT not to look for variable names and also give it both the upper-left and lower-right cells of the range:
getnames=no;
range="EmailSummary$O5:O5";
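
Putting both changes back into the original step, a corrected version might look like this (a sketch; the path, libref, and dataset names are taken from the question, and the CLI connection error itself is a separate issue, since DBMS=EXCELCS also requires a reachable SAS PC Files Server):

```sas
RSUBMIT INHERITLIB=(mywork);
OPTIONS msglevel=i VALIDVARNAME=any;
proc import datafile="\\mysite.com\folder1\folder2\myfile.xlsm"
    dbms=EXCELCS replace out=Output;
    getnames=no;                 /* do not treat the cell as a header row */
    range="EmailSummary$O5:O5";  /* upper-left and lower-right cell */
run;
ENDRSUBMIT;
```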

Related

Why is my specified range in PROC IMPORT being ignored?

I am trying to import a set of exchange rates. The data set looks like this:
That is to say, the actual data should be read from row 5 downwards in the sheet named "Växelkurser", and the variable names should be read from row 4.
I try writing the following code:
PROC IMPORT
DATAFILE="/opt3/01_Dataleveranser/03_IBIS/Inläsning/IBIS3/Växelkurser macrobond/Växelkurser19DEC2022.xlsx"
OUT=WORK.VALUTOR_0000
DBMS=xlsx
REPLACE;
sheet="Växelkurser";
getnames=yes;
range="Växelkurser$A4:0";
RUN;
And I get the following result:
I clearly specified that SAS should start reading from the fourth row and that the variable names should be read from that row. Why is this being ignored and how would I make this work?
The problem seems to be that you are specifying both sheet= and range=. The SHEET statement tells SAS to read the whole sheet, and I think this overrides the later RANGE statement.
Remove the following line and the code should work as expected:
sheet="Växelkurser";
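
With that line removed, the remaining code from the question (a sketch, otherwise unchanged; note the sheet name is already part of the RANGE value) would be:

```sas
PROC IMPORT
    DATAFILE="/opt3/01_Dataleveranser/03_IBIS/Inläsning/IBIS3/Växelkurser macrobond/Växelkurser19DEC2022.xlsx"
    OUT=WORK.VALUTOR_0000
    DBMS=xlsx
    REPLACE;
    getnames=yes;
    range="Växelkurser$A4:0";  /* sheet name included here, so no SHEET statement */
RUN;
```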

Issues with "QUERY(IMPORTRANGE)"

Here's my first question on this forum, though I've read through a lot of good answers here.
Can anyone tell me what I'm doing wrong with my attempt to do a query import from one sheet to a column in another?
Here's the formula I've tried, but all my adjustments still get me a parsing error.
=QUERY(IMPORTRANGE("https://docs.google.com/spreadsheets/d/1yGPdI0eBRNltMQ3Wr8E2cw-wNlysZd-XY3mtAnEyLLY/edit#gid=163356401","Master Treatment Log (Responses)!V2:V")"WHERE Col8="'&B2&'")")
Note that IMPORTRANGE is only needed for imports between spreadsheets. If you only import from one sheet into another within the same spreadsheet, I would suggest using filter() or query() directly.
Assuming the value in B2 is actually a string (and not a number), you can try
=QUERY(IMPORTRANGE("https://docs.google.com/spreadsheets/d/1yGPdI0eBRNltMQ3Wr8E2cw-wNlysZd-XY3mtAnEyLLY/edit#gid=163356401","Master Treatment Log (Responses)!V2:V"), "WHERE Col8 = '"&B2&"'", 0)
Note the added comma before "WHERE". If you want to import a header row, change 0 to 1.
See if that helps? If not, please share a copy of your spreadsheet (sensitive data erased).

How to remove instance or row with missing values in Python Script in Orange Data Mining?

I want to remove an instance or row with missing values.
It's so simple to do it by using Impute widget, but now I want to do it in Python Script Widget.
How do I do this?
Write this in the Python Script widget (in_data and out_data are the widget's predefined input and output variables):
import numpy as np
from Orange.preprocess import impute

drop_instances = impute.DropInstances()
var = in_data.domain.attributes[0]   # choose the variable you want to check
mask = drop_instances(in_data, var)  # True for rows where var is missing
out_data = in_data[np.logical_not(mask)]  # keep only the complete rows
If you need more information, feel free to ask in a comment below!

neo4j import script endless loop because 2 properties with same name

I just managed to freeze my whole environment with a Cypher import script. The process ran uncontrollably at 99% CPU until we killed it.
I am not sure, but I think the bug was in the import script - trying to set two properties with the same name - reading like:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///import.csv' AS import FIELDTERMINATOR ';'
... (some WITH / WHERE clauses)
CREATE (:Mylabel {myproperty: import.column1, myproperty: import.column2});
Does anyone have experience with behaviour like that?
EDIT:
I am not allowed to copy-paste the exact code, but I can try to leave it semantically intact:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///db.csv' AS row FIELDTERMINATOR ';'
WITH row
WHERE row.typerow = 'Some_Identifier'
WITH head(collect(row.id)) as aid, row.exclusive AS excl, toInteger(row.alwsel) AS alwsel
CREATE (:Mylabel:Mytype {aid: toInteger(aid), exclusive: toString(excl),
exclusive: CASE WHEN alwsel=1 THEN true ELSE NULL END});
As was inquired below: there is no constraint on the property in question. I am currently not able to run any tests; I will be in a few days.
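
Assuming the two assignments were meant to feed a single property, one way to express that without a duplicate key (a sketch; the precedence between the CASE result and the string value is a guess and may need to be swapped for the real data) is:

```cypher
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///db.csv' AS row FIELDTERMINATOR ';'
WITH row
WHERE row.typerow = 'Some_Identifier'
WITH head(collect(row.id)) AS aid, row.exclusive AS excl, toInteger(row.alwsel) AS alwsel
CREATE (:Mylabel:Mytype {
    aid: toInteger(aid),
    // a map literal may list each key only once
    exclusive: CASE WHEN alwsel = 1 THEN true ELSE toString(excl) END
});
```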

Why does Open XML API Import Text Formatted Column Cell Rows Differently For Every Row

I am working on an ingestion feature that will take a strongly formatted .xlsx file and import the records to a temp storage table and then process the rows to create db records.
One of the columns is strictly formatted as "Text" but it seems like the Open XML API handles the columns cells differently on a row-by-row basis. Some of the values while appearing to be numeric values are truly not (which is why we format the column as Text) -
some examples are "211377", "211727.01", "209395.388", "209395.435"
What these values represent is not important, but some values (using the Open XML API v2.5 library) are read in properly as text, whether retrieved from the Shared Strings collection or simply from the InnerXml property, while others get pulled in as numbers with what appears to be appended rounding or precision.
For example the "211377", "211727.01" and "209395.435" all come in exactly as they are in the spreadsheet but the "209395.388" value is being pulled in as "209395.38800000001" (there are others that this happens to as well).
There seems to be no rhyme or reason to which values get mangled and which ones import fine. What is really frustrating is that if I use the native Import feature in SQL Server Management Studio to ingest the same spreadsheet into a temp table, this does not happen - so how is it that the SSMS import can handle these values as purely text for all rows but the Open XML API cannot?
Your main problem seems to be this value:
"209395.388" being pulled in as "209395.38800000001"
Yes, in the .xlsx file the value is stored as 209395.38800000001 instead of 209395.388, and that is a correct way to store a floating-point number; there is nothing wrong with it. You can confirm this with the following code snippet:
string val = "209395.38800000001"; // <= What we extract from Open Xml
Console.WriteLine(double.Parse(val)); // <= simply parse it as a double and print
The output is :
209395.388 // <= yes the expected value
So there is nothing wrong with the value you extract from the .xlsx file using the Open XML SDK.
Now to cells: yes, a cell can have a variety of formats - numbers, text, booleans, or shared-string text. You can also apply styles to a cell, which format the value for display in Excel (for example, date/time formats or forced strings). This is how Excel handles the vast variety of data it supports, and the .xlsx file format had to be a little complex to support it all.
My advice is to apply a proper parse to each extracted value to identify what format it represents (for example, to determine whether it is a number or text) and use the appropriate conversion.
For example:
string val = "209395.38800000001";
Console.WriteLine(float.Parse(val)); // <= float.Parse will produce a different value: 209395.4
Update:
Here is how the value is saved in the internal XML. Try it for yourself:
Make an .xlsx file with the value 209395.388 -> change the extension to .zip -> unzip it -> go to the worksheet folder -> open Sheet1.
You will notice that the value is stored as 209395.38800000001. So the API does nothing wrong in extracting the stored number; it is your job to decide what format to apply.
But if you make the whole column Text before adding data, you will see that the .xlsx file holds the data as-is; simply put, as a string.
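
The round-trip described above is easy to verify outside of Excel and C#. A minimal Python sketch (using the value from the question; the behavior is a property of IEEE-754 doubles, not of any particular API):

```python
# The 17-significant-digit string stored inside the .xlsx file
stored = "209395.38800000001"
# The value as typed into the spreadsheet
typed = "209395.388"

# Both strings parse to exactly the same IEEE-754 double
assert float(stored) == float(typed)

# Shortest round-trip formatting recovers the original text...
print(repr(float(stored)))           # -> 209395.388

# ...while 17 significant digits reproduce what Excel stored
print(format(float(typed), ".17g"))  # -> 209395.38800000001
```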