In SAS, how to concatenate multiple rows into 1 by some ID - merge

I have a table like this
org_ID linenr text
811558672 10 Legevirksomhet.
811560782 10 Clavier Classics er et musikkselskap som produserer komposisjoner og
811560782 20 arrangementer av svært høy kvalitet. De kombinerer den klassiske
811560782 30 musikktradisjonen med moderne teknikker og deres kunder spenner fra
811560782 40 individuelle musikere til ensembler, festivalarrangører, konserthus,
811560782 50 kulturinstitusjoner, eventskapere og mediaprodusenter.
811560812 10 Grafisk design, illustrasjon og nærliggende virksomhet.
811561592 10 Sosial- og helsetjenesten. Konsulentvirksomhet: Veiledning til
811561592 20 foreldre, fosterhjem, skole og barnehage.
As you can see, some org_ID values appear multiple times because one line of text is not enough for them; when this happens, linenr takes on multiple values. Now I want to concatenate the multiple lines of text into one whenever the org_ID is the same. How shall I do this? Many thanks in advance.

Use SAS RETAIN functionality to concatenate the text and only output when a new org_ID is read.
Note: The two IF statements handle the cases of the first and last rows, where there is no previous ID or no next ID.
Working code (your input data must be sorted by org_ID):
data have;
infile datalines dlm=',' dsd;
length org_ID 8. linenr 8. text $200.;
input org_ID linenr text $;
datalines;
811558672,10, "Legevirksomhet."
811560782,10, "Clavier Classics er et musikkselskap som produserer komposisjoner og"
811560782,20, "arrangementer av svært høy kvalitet. De kombinerer den klassiske"
811560782,30, "musikktradisjonen med moderne teknikker og deres kunder spenner fra"
811560782,40, "individuelle musikere til ensembler, festivalarrangører, konserthus,"
811560782,50, "kulturinstitusjoner, eventskapere og mediaprodusenter."
811560812,10, "Grafisk design, illustrasjon og nærliggende virksomhet."
811561592,10, "Sosial- og helsetjenesten. Konsulentvirksomhet: Veiledning til"
811561592,20, "foreldre, fosterhjem, skole og barnehage."
;
run;
data want(keep=id longtext rename=(id=org_ID));
  length longtext $2000;   /* explicit length prevents truncation */
  set have nobs=nobs;
  retain longtext id;
  if _N_=1 then do; longtext=text; id=org_ID; end;
  else if org_ID ne id then do;
    output;                /* the retained id still holds the finished group's org_ID */
    longtext=text; id=org_ID;
  end;
  else longtext=cats(longtext,text);
  if _N_=nobs then output;
run;
Output:
org_ID=811558672 longtext=Legevirksomhet.
org_ID=811560782 longtext=Clavier Classics er et musikkselskap som produserer komposisjoner ogarrangementer av svært høy kvalitet. De kombinerer den klassiskemusikktradisjonen med moderne teknikker og deres kunder spenner fraindividuelle musikere til ensembler, festivalarrangører, konserthus,kulturinstitusjoner, eventskapere og mediaprodusenter.
org_ID=811560812 longtext=Grafisk design, illustrasjon og nærliggende virksomhet.
org_ID=811561592 longtext=Sosial- og helsetjenesten. Konsulentvirksomhet: Veiledning tilforeldre, fosterhjem, skole og barnehage.

A DOW loop can accumulate each line of text in the org_ID group into a final longtext. The longtext should be assigned a specific length in order to prevent truncations that may occur if default lengths are used. You may or may not want a space separator between lines that are concatenated.
data want(keep=org_ID longtext);
do until (last.org_ID);
set have;
by org_ID;
length longtext $2000;
longtext = catx(' ', longtext, text);
end;
run;
If the data is not sorted, but the org_ID rows are contiguous, you can use
by org_ID notsorted;
So what is happening?
longtext is a non-dataset variable, so it is automatically reset to missing at the top of the data step.
The data step iterates over each row in the group until the last row of the group.
The length of the variable longtext is specified after the SET statement so that it is the last variable in the program data vector (PDV), and thus the second column of the kept variables.
catx is used to accumulate the concatenation of the text values within the group. A space is used to separate the text parts.
If you do not want the space separator, accumulate using
longtext = cats(longtext, text);
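To see the difference between the two on a small fragment of the example text:
data _null_;
  with_space    = catx(' ', 'Veiledning til', 'foreldre, fosterhjem');
  without_space = cats('Veiledning til', 'foreldre, fosterhjem');
  put with_space=;     /* Veiledning til foreldre, fosterhjem */
  put without_space=;  /* Veiledning tilforeldre, fosterhjem  */
run;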

Related

Defining Fixed SAS Macro Variables

I am trying to have a macro run, but I'm not sure if it will resolve since I won't have a connection to my database for a little while. I want to know if the macro is written correctly and will resolve the states on each pass through the code (i.e., do it repeatedly and create a table for each state).
The second thing I would like to know is whether I can run a macro through a FROM statement. For example, let entpr be the database that I'm pulling from. Would the following resolve correctly:
proc sql;
select * from entpr.&state.; /*Do I need the . after &state?*/
The rest of my code:
libname mdt "........."
%let state = ny il ar ak mi;
proc sql;
create table mdt.&state._members
as select
corp_ent_cd
,mkt_sgmt_admnstn_cd
,fincl_arngmt_cd
,aca_ind
,prod_type
,cvyr
,cvmo
,sum(1) as mbr_cnt
from mbrship1_&state.
group by 1,2,3,4,5,6,7;
quit;
If &state contains ny il ar ak mi then, as written, the FROM statement in your code will resolve to from mbrship1_ny il ar ak mi, which is invalid SQL syntax.
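You can see that resolution in the log with a quick %put:
%let state = ny il ar ak mi;
%put from mbrship1_&state.;   /* from mbrship1_ny il ar ak mi */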
My guess is that you're wanting to run the SQL statement for each of the following tables:
mbrship1_ny
mbrship1_il
mbrship1_ar
mbrship1_ak
mbrship1_mi
In which case the simplest macro would look something like this:
%macro do_sql(state=);
proc sql;
create table mdt.&state._members
as select
...
from mbrship1_&state
group by 1,2,3,4,5,6,7;
quit;
%mend;
%do_sql(state=ny);
%do_sql(state=il);
%do_sql(state=ar);
%do_sql(state=ak);
%do_sql(state=mi);
As to your question regarding whether or not to include the ".": the rule is that if the character following your macro variable reference is not a letter, a digit, an underscore, or another period, then the period is optional. Those are the characters that are valid in a macro variable name, so as long as the next character is not one of them, SAS can tell where the macro variable name finishes on its own. Some people always include the period; personally I leave it out unless it's required.
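A quick illustration of the rule (the libref mylib is only for illustration):
%let state = ny;
%put mbrship1_&state;     /* mbrship1_ny - nothing name-like follows, so no dot is needed      */
%put &state._members;     /* dot needed: &state_members would look for a variable STATE_MEMBERS */
%put mylib.&state.data;   /* one dot is consumed as the delimiter: mylib.nydata                 */
%put mylib.&state..data;  /* two dots when a literal dot must follow: mylib.ny.data             */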
When selecting data from multiple tables, whose names themselves contain some data (in your case the state) you can stack the data with:
UNION ALL in SQL
SET in Data step
As long as you are stacking data, you should also add a new column to the query selection that tracks the state.
Consider this pattern for stacking in SQL
data one;
do index = 1 to 10; do _n_ = 1 to 2; output; end; end;
run;
data two;
do index = 101 to 110; do _n_ = 1 to 2; output; end; end;
run;
proc sql;
create table want as
select
source, index
from
(select 'one' as source, * from one)
union all
(select 'two' as source, * from two)
;
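As an aside, here is a sketch of the data step SET alternative mentioned above; the indsname= option supplies the source table for each row (the variable names source and dsn are my own choices):
data want_ds;
  length source $32;
  set one two indsname=dsn;     /* dsn holds libref.member for the current row */
  source = scan(dsn, 2, '.');   /* keep just the member name: ONE or TWO */
run;
The SQL pattern, though, is the one the macro below builds on.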
The pattern can be abstracted into a template for SQL source code that will be generated by macro.
%macro my_ultimate_selector (out=, inlib=, prefix=, states=);
%local index n state;
%let n = %sysfunc(countw(&states));
proc sql;
create table &out as
select
state
, corp_ent_cd
, mkt_sgmt_admnstn_cd
, fincl_arngmt_cd
, aca_ind
, prod_type
, cvyr
, cvmo
, count(*) as state_7dim_level_cnt
from
%* ----- use the UNION ALL pattern for stacking data -----;
%do index = 1 %to &n;
%let state = %scan(&states, &index);
%if &index > 1 %then %str(UNION ALL);
(select "&state" as state, * from &inlib..&prefix.&state.)
%end;
group by 1,2,3,4,5,6,7,8 %* this seems to be too much grouping?;
;
quit;
%mend;
%my_ultimate_selector (out=work.want, inlib=mdt, prefix=mbrship1_, states=ny il ar ak mi)
If the columns of the inlib tables are not identical with regard to column order and type, use a UNION ALL CORRESPONDING to have the SQL procedure line up the columns for you.
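A minimal sketch of that variant, reusing the two small tables from the stacking example; CORR matches columns by name rather than by position (columns that do not appear in both tables are dropped unless OUTER UNION CORR is used):
proc sql;
  create table stacked as
  select 'one' as source, * from one
  union corr all
  select 'two' as source, * from two;
quit;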

SAS hash merge -- smaller dataset as hash object

I'm using the %HASHMERGE macro found at http://www.sascommunity.org/mwiki/images/2/22/Hashmerge.sas and the following example datasets:
data working;
length IID TYPE $12;
input IID $ TYPE $;
datalines;
B 0
B 0
A 1
A 1
A 1
C 2
D 3
;
run;
data master;
length IID FIRST_NAME MIDDLE_NAME LAST_NAME SUFFIX_NAME $12;
input IID $ FIRST_NAME $ MIDDLE_NAME $ LAST_NAME $ SUFFIX_NAME;
datalines;
X John James Smith Sr
Z Sarah Marie Jones .
Y Tim William Miller Jr
C Nancy Lynn Brown .
B Carol Elizabeth Collins .
A Wayne Mark Rooney .
;
run;
On the working dataset, I'm trying to attach the _NAME variables from the master dataset using this hash merge. The output looks fine and IS the desired output. However, in my real-life scenario the master dataset is too large to fit into a hash object, and the macro keeps placing it as the hash object. I'd ultimately like to flip these two datasets so that the working dataset is the hash object, but I cannot get the desired output when I flip the code. Below is the part of the macro that produces the desired output and needs to be adjusted, but I am unsure how to set this up:
data OUTPUT;
if 0 then set MASTER (keep=IID FIRST_NAME MIDDLE_NAME LAST_NAME SUFFIX_NAME)
WORKING (keep=IID);
declare hash h_merge(dataset:"MASTER"); /* I want WORKING to be the hash object since it's smaller! */
rc=h_merge.DefineKey("IID");
rc=h_merge.DefineData("FIRST_NAME","MIDDLE_NAME","LAST_NAME","SUFFIX_NAME");
rc=h_merge.DefineDone();
do while(not eof);
set WORKING (keep=IID) end=eof;
call missing(FIRST_NAME,MIDDLE_NAME,LAST_NAME,SUFFIX_NAME);
rc=h_merge.find();
output;
end;
drop rc;
stop;
run;
Desired output:
IID FIRST_NAME MIDDLE_NAME LAST_NAME SUFFIX_NAME
---------------------------------------------------
B Carol Elizabeth Collins
B Carol Elizabeth Collins
A Wayne Mark Rooney
A Wayne Mark Rooney
A Wayne Mark Rooney
C Nancy Lynn Brown
D
While it's feasible to do what you say, I doubt you'll get that from a non-purpose-built macro. That's because it's not the normal way to do that; typically you want to keep the main dataset in its form and put the relational dataset in the hash table. Usually the sizes are reversed of course - the relational table is usually smaller than the main table.
Personally I would not use hash for this particular case. I'd use a format (or three). Just as fast as a hash and has less of the size issues (since it doesn't have to fit in memory), though it eventually would slow down (but not break!) due to size.
Format solution:
data working;
length IID TYPE $12;
input IID $ TYPE $;
datalines;
B 0
B 0
A 1
A 1
A 1
C 2
D 3
;
run;
data master;
length IID FIRST_NAME MIDDLE_NAME LAST_NAME SUFFIX_NAME $12;
input IID $ FIRST_NAME $ MIDDLE_NAME $ LAST_NAME $ SUFFIX_NAME;
datalines;
X John James Smith Sr
Z Sarah Marie Jones .
Y Tim William Miller Jr
C Nancy Lynn Brown .
B Carol Elizabeth Collins .
A Wayne Mark Rooney .
;
run;
data for_fmt;
set master;
retain type 'char';
length fmtname $32
label $255
start $255
;
start=iid;
*first;
label=first_name;
fmtname='$FIRSTNAMEF';
output;
*last;
label=last_name;
fmtname='$LASTNAMEF';
output;
*middle;
label=middle_name;
fmtname='$MIDNAMEF';
output;
*suffix;
label=suffix_name;
fmtname='$SUFFNAMEF';
output;
if _n_=1 then do;
start=' ';
label=' ';
hlo='o';
fmtname='$FIRSTNAMEF';
output;
fmtname='$LASTNAMEF';
output;
fmtname='$MIDNAMEF';
output;
fmtname='$SUFFNAMEF';
output;
end;
run;
proc sort data=for_fmt;
by fmtname start;
run;
proc format cntlin=for_fmt;
quit;
data want;
set working;
first_name = put(iid,$FIRSTNAMEF.);
last_name = put(iid,$LASTNAMEF.);
middle_name = put(iid,$MIDNAMEF.);
suffix_name = put(iid,$SUFFNAMEF.);
run;
That said...
If you do want to do this in a hash table, what you'd need to do is, for each row in MASTER, do a FIND in the working table, then if successful a REPLACE, then FIND_NEXT and REPLACE until that fails.
The problem? You're doing at least one find per MASTER row, and MASTER, as you yourself noted, is very large. If WORKING is 100k rows and MASTER is 100M rows, then you're doing 1,000 finds for each match. That's very expensive, and probably means you're better off with some other solution.
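Here is a sketch of that flipped layout (not the %HASHMERGE macro itself, so treat it as an assumption-laden outline): WORKING goes into a multidata hash, MASTER streams past it once, and REPLACEDUP fills in the names. The output dataset name is mine, and the rows come back in IID order rather than in WORKING's original row order, so it does not reproduce the desired output exactly as listed.
data _null_;
  /* host variables for the hash: lengths come from the two source tables */
  if 0 then set WORKING (keep=IID) MASTER (keep=FIRST_NAME MIDDLE_NAME LAST_NAME SUFFIX_NAME);
  declare hash w(multidata:'y', ordered:'a');
  rc=w.DefineKey("IID");
  rc=w.DefineData("IID","FIRST_NAME","MIDDLE_NAME","LAST_NAME","SUFFIX_NAME");
  rc=w.DefineDone();
  /* 1. load the small WORKING table; the name slots start out missing */
  do while(not eof1);
    set WORKING (keep=IID) end=eof1;
    call missing(FIRST_NAME,MIDDLE_NAME,LAST_NAME,SUFFIX_NAME);
    rc=w.add();
  end;
  /* 2. stream the large MASTER table past the hash and fill in the names;
        MASTER's name columns are renamed so FIND() cannot overwrite them */
  do while(not eof2);
    set MASTER (rename=(FIRST_NAME=_fn MIDDLE_NAME=_mn LAST_NAME=_ln SUFFIX_NAME=_sn)) end=eof2;
    rc=w.find();                    /* first WORKING item with this IID */
    do while(rc=0);
      FIRST_NAME=_fn; MIDDLE_NAME=_mn; LAST_NAME=_ln; SUFFIX_NAME=_sn;
      rc=w.replacedup();            /* update the current duplicate item */
      rc=w.find_next();             /* move on to the next one, if any   */
    end;
  end;
  /* 3. dump the hash; rows come out in IID order, not in WORKING's row order */
  rc=w.output(dataset:"WANT_FLIPPED");
  stop;
run;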

Replacing Turkish characters with English characters

I have a table which has 120 columns, and some of them include Turkish characters (for example "ç", "ğ", "ı", "ö"). I want to replace these Turkish characters with English characters (for example "c", "g", "i", "o"). Using the TRANWRD function would be really tedious because I would have to write it 120 times, and the column names sometimes change, so I would constantly have to check the code column by column.
Is there a simple macro which replaces these characters in all columns?
EDIT
In retrospect, this is an overly complicated solution... The translate() function should be used, as pointed out by another user. It could be wrapped in a SAS function defined with PROC FCMP if it is used repeatedly.
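As an illustration of that last point, a PROC FCMP wrapper might look like the sketch below; the function name fix_trk and the output library are my own choices, and the character lists only cover the four example letters.
proc fcmp outlib=work.funcs.trk;
  /* wrap TRANSLATE so the character lists live in one place */
  function fix_trk(s $) $ 200;
    return (translate(s, "cgio", "çğıö"));
  endsub;
run;

options cmplib=work.funcs;

data _null_;
  fixed = fix_trk("çğıö test");
  put fixed=;   /* fixed=cgio test */
run;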
A combination of regular expressions and a DO loop can achieve that.
Step 1: Build a conversion table in the following manner
Accented letters that resolve to the same replacement character are put on a single line, separated by the | symbol.
data conversions;
infile datalines dsd;
input orig $ repl $;
datalines;
ç,c
ğ,g
ı,i
ö|ò|ó,o
ë|è,e
;
Step 2: Store original and replacement strings in macro variables
proc sql noprint;
select orig, repl, count(*)
into :orig separated by ";",
:repl separated by ";",
:nrepl
from conversions;
quit;
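A quick check of what ends up in the macro variables:
%put orig = &orig;    /* orig = ç;ğ;ı;ö|ò|ó;ë|è */
%put repl = &repl;    /* repl = c;g;i;o;e */
%put nrepl = &nrepl;  /* 5, possibly padded with leading blanks, which is harmless in the DO loop */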
Step 3: Do the actual conversion
Just to show how it works, let's deal with just one column.
data convert(drop=i re);
myString = "ç ğı òö ë, è";
do i = 1 to &nrepl;
re = prxparse("s/" || scan("&orig",i,";") || "/" || scan("&repl",i,";") || "/");
myString = prxchange(re,-1,myString);
end;
run;
Resulting myString: "c gi oo e, e"
To process all character columns, we use an array
Say your table is named mySource and you want all character variables to be processed; we'll create a vector called cols for that.
data convert(drop=c i re);
set mySource;
array cols(*) _character_;
do c = 1 to dim(cols);
do i = 1 to &nrepl;
re = prxparse("s/" || scan("&orig",i,";") || "/" || scan("&repl",i,";") || "/");
cols(c) = prxchange(re,-1,cols(c));
end;
end;
run;
When changing single characters, TRANSLATE is the proper function; it takes just one line of code.
translated = translate(string,"cgio","çğıö");
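If you combine TRANSLATE with the same _character_ array idea shown above, the whole table is handled without any macro code; a minimal sketch (the dataset names are placeholders, and both character lists should be extended to cover every letter you need):
data mytable_fixed;
  set mytable;
  array _c[*] _character_;   /* every character column in the table */
  do _i = 1 to dim(_c);
    _c[_i] = translate(_c[_i], "cgio", "çğıö");
  end;
  drop _i;
run;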
First get all your character columns from dictionary.columns, and then replace the values of all of them in a macro %do loop.
You can try a program like this (Replace MYTABLE with your table name):
proc sql;
select name , count(*) into :columns separated by ' ', :count
from dictionary.columns
where memname = 'MYTABLE' and type = 'char'; /* character columns only */
quit;
%macro m;
data mytable;
set mytable;
%do i=1 %to &count;
%scan(&columns ,&i) = tranwrd(%scan(&columns ,&i),"ç","c");
%scan(&columns ,&i) = tranwrd(%scan(&columns ,&i),"ğ","g");
...
%end;
run;
%mend;
%m;

In SAS, how do you collapse multiple rows into one row based on some ID variable?

The data I am working with is currently in the form of:
ID Sex Race Drug Dose FillDate
1 M White ziprosidone 100mg 10/01/98
1 M White ziprosidone 100mg 10/15/98
1 M White ziprosidone 100mg 10/29/98
1 M White ambien 20mg 01/07/99
1 M White ambien 20mg 01/14/99
2 F Asian telaprevir 500mg 03/08/92
2 F Asian telaprevir 500mg 03/20/92
2 F Asian telaprevir 500mg 04/01/92
And I would like to write SQL code to get the data in the form of:
ID Sex Race Drug1 DrugDose1 FillDate1_1 FillDate1_2 FillDate1_3 Drug2 DrugDose2 FillDate2_1 FillDate2_2 FillDate2_3
1 M White ziprosidone 100mg 10/01/98 10/15/98 10/29/98 ambien 20mg 01/07/99 01/14/99 null
2 F Asian telaprevir 500mg 03/08/92 03/20/92 04/01/92 null null null null null
I need just one row for each unique ID with all of the unique drug/dose/fill info in columns, not rows. I suppose it can be done using PROC TRANSPOSE, but I am not sure of the most efficient way of doing the multiple transposes. I should note that I have over 50,000 unique IDs, each with varying numbers of drugs, doses, and corresponding fill dates. I would like to return null/empty values for those columns that do not have data to fill in. Thanks in advance.
To some extent, the desired efficiency of this determines the best solution.
For example, assuming you know the maximum reasonable number of fill dates, you could use the following to very quickly get a transposed table - likely the fastest way to do that - but at the cost of needing a large amount of post-processing, as it will output a lot of data you don't really want.
proc summary data=have nway;
class id sex race;
output out=want (drop=_:)
idgroup(out[5] (drug dose filldate)=) / autoname;
run;
On the other side of things, the vertical-and-transpose is the "best" solution in terms of not requiring additional steps; though it might be slow.
data have_t;
set have;
by id sex race drug dose notsorted;
length varname value $64; *some reasonable maximum, particularly for the drug name;
if first.ID then do;
drugcounter=0;
end;
if first.dose then do;
drugcounter+1;
fillcounter=0;
varname = cats('Drug',drugcounter);
value = drug;
output;
varname = cats('DrugDose',drugcounter);
value = dose;
output;
end;
call missing(value);
fillcounter+1;
varname=cats('Filldate',drugcounter,'_',fillcounter);
value_n = filldate;
output;
run;
proc transpose data=have_t(where=(not missing(value))) out=want_c;
by id sex race ;
id varname;
var value;
run;
proc transpose data=have_t(where=(not missing(value_n))) out=want_n;
by id sex race ;
id varname;
var value_n;
run;
data want;
merge want_c want_n;
by id sex race;
run;
It's not crazy slow, really, and odds are it's fine for your 50k IDs (though you don't say how many drugs). 1 or 2 GB of data will work fine here, especially if you don't need to sort them.
Finally, there are some other solutions that are in between. You could do the transpose entirely using arrays in the data step, for one, which might be the best compromise; you have to determine in advance the maximum bounds for the arrays, but that's not the end of the world.
It all depends on your data, though, which is really the best. I would probably try the data step/transpose first: that's the most straightforward, and the one most other programmers will have seen before, so it's most likely the best solution unless it's prohibitively slow.
Consider the following query, which uses two derived tables (inner and outer) to establish an ordinal row count in FillDate order. Then, using the row count, IF/THEN or CASE/WHEN logic populates the iterated columns. The outer query takes the max values grouped by id, sex, race.
The only caveat is that you need to know ahead of time the expected or maximum number of rows per ID (i.e., from another query or a table browse). Hence, fill in the ellipses (...) as needed. Do note that missing values will be generated for columns that do not apply to a particular ID. And of course, please adjust to your actual dataset name.
proc sql;
CREATE TABLE DrugTableFlat AS (
SELECT id, sex, race,
Max(Drug_1) As Drug1, Max(Drug_2) As Drug2, Max(Drug_3) As Drug3, ...
Max(Dose_1) As Dose1, Max(Dose_2) As Dose2, Max(Dose_3) As Dose3, ...
Max(FillDate_1) As FillDate1, Max(FillDate_2) As FillDate2,
Max(FillDate_3) As FillDate3 ...
FROM
(SELECT id, sex, race,
CASE WHEN RowCount=1 THEN Drug END AS Drug_1,
CASE WHEN RowCount=2 THEN Drug END AS Drug_2,
CASE WHEN RowCount=3 THEN Drug END AS Drug_3,
...
CASE WHEN RowCount=1 THEN Dose END AS Dose_1,
CASE WHEN RowCount=2 THEN Dose END AS Dose_2,
CASE WHEN RowCount=3 THEN Dose END AS Dose_3,
...
CASE WHEN RowCount=1 THEN FillDate END AS FillDate_1,
CASE WHEN RowCount=2 THEN FillDate END AS FillDate_2,
CASE WHEN RowCount=3 THEN FillDate END AS FillDate_3,
...
FROM
(SELECT t1.id, t1.sex, t1.race, t1.drug, t1.dose, t1.filldate,
(SELECT Count(*) FROM DrugTable t2
WHERE t1.filldate >= t2.filldate AND t1.id = t2.id) As RowCount
FROM DrugTable t1) AS dT1
) As dT2
GROUP BY id, sex, race);
Here's my attempt at an array-based solution:
/* Import data */
data have;
input ID Sex :$1. Race :$5. Drug :$11. Dose :$5. FillDate :mmddyy8.;
format filldate yymmdd10.;
cards;
1 M White ziprosidone 100mg 10/01/98
1 M White ziprosidone 100mg 10/15/98
1 M White ziprosidone 100mg 10/29/98
1 M White ambien 20mg 01/07/99
1 M White ambien 20mg 01/14/99
2 F Asian telaprevir 500mg 03/08/92
2 F Asian telaprevir 500mg 03/20/92
2 F Asian telaprevir 500mg 04/01/92
;
run;
/* Calculate array bounds - SQL version */
proc sql _method noprint;
select DATES into :MAX_DATES_PER_DRUG trimmed from
(select count(ID) as DATES from have group by ID, drug, dose)
having DATES = max(DATES);
select max(DRUGS) into :MAX_DRUGS_PER_ID trimmed from
(select count(DRUG) as DRUGS from
(select distinct DRUG, ID from have)
group by ID
)
;
quit;
/* Calculate array bounds - data step version */
data _null_;
set have(keep = id drug) end = eof;
by notsorted id drug;
retain max_drugs_per_id max_dates_per_drug;
if first.id then drug_count = 0;
if first.drug then do;
drug_count + 1;
date_count = 0;
end;
date_count + 1;
if last.id then max_drugs_per_id = max(max_drugs_per_id, drug_count);
if last.drug then max_dates_per_drug = max(max_dates_per_drug, date_count);
if eof then do;
call symput("max_drugs_per_id" ,cats(max_drugs_per_id));
call symput("max_dates_per_drug",cats(max_dates_per_drug));
end;
run;
/* Check macro vars */
%put MAX_DATES_PER_DRUG = "&MAX_DATES_PER_DRUG";
%put MAX_DRUGS_PER_ID = "&MAX_DRUGS_PER_ID";
/* Transpose */
data want;
if 0 then set have;
array filldates[&MAX_DRUGS_PER_ID,&MAX_DATES_PER_DRUG]
%macro arraydef;
%local i;
%do i = 1 %to &MAX_DRUGS_PER_ID;
filldates&i._1-filldates&i._&MAX_DATES_PER_DRUG
%end;
%mend arraydef;
%arraydef;
array drugs[&MAX_DRUGS_PER_ID] $11;
array doses[&MAX_DRUGS_PER_ID] $5;
drug_count = 0;
do until(last.id);
set have;
by ID drug dose notsorted;
if first.drug then do;
date_count = 0;
drug_count + 1;
drugs[drug_count] = drug;
doses[drug_count] = dose;
end;
date_count + 1;
filldates[drug_count,date_count] = filldate;
end;
drop drug dose filldate drug_count date_count;
format filldates: yymmdd10.;
run;
The data step code for calculating the array bounds is probably more efficient than the SQL version, but it's also a bit more verbose. On the other hand, the SQL version originally also required trimming whitespace from the macro vars - now handled with the trimmed keyword (thanks Tom!).
The transposing data step is probably also at the more efficient end of the scale compared to the proc transpose / proc sql options in the other answers, as it makes only 1 further pass through the dataset, but again it's also fairly complex.

sas macro index or other?

I have 169 towns for which I want to iterate a macro. I need the output files to be saved using the town-name (rather than a town-code). I have a dataset (TOWN) with town-code and town-name. Is it possible to have a %let statement that is set to the town-name for each iteration where i=town-code?
I know that I can list out the town-names using the index function, but I'd like a way to set the index function so that it sets a %let statement to the TOWN.town-name when i=TOWN.town-code.
All the answers below seem possible. I have used the %let = %scan( ,&i) option for now. A limitation is that the town names can be more than one word, so I've substituted underscores for spaces that I correct later.
This is my macro. I output proc report to excel for each of the 169 towns. I need the excel file to be saved as the name of the town and for the header to include the name of the town. Then, in excel, I merge all 169 worksheets into a single workbook.
%MACRO BY_YEAR;
%let townname=Andover Ansonia Ashford Avon ... Woodbury Woodstock;
%do i = 1999 %to 2006;
%do j = 1 %to 169;
%let name = %scan(&townname,&j);
ods tagsets.msoffice2k file="&ASR.\Town_Annual\&i.\&name..xls" style=minimal;
proc report data=ASR nofs nowd split='/';
where YR=&i and TWNRES=&j;
column CODNUM AGENUM SEX,(dths_sum asr_sum seasr_sum);
define CODNUM / group ;
define agenum / group ;
define sex / across ;
define dths_sum / analysis ;
define asr_sum / analysis ;
define seasr_sum / analysis ;
break after CODNUM / ul;
TITLE1 "&name Resident Age-Specific Mortality Rates by Sex, &i";
TITLE2 "per 100,000 population for selected causes of death";
run;
ods tagsets.msoffice2k close;
%end;
%end;
%MEND;
My guess is that the reason why you want to look up the town name by town index is to repeatedly call a macro with each town name. If this is the case, then you don't even need to get involved with the town index business at all. Just call the macro with each town name. There are many ways to do this. Here is one way using call execute().
data towns;
infile cards dlm=",";
input town :$char10. @@;
cards;
My Town,Your Town,His Town,Her Town
;
run;
%macro doTown(town=);
%put Town is &town..;
%mend doTown;
/* call the macro for each town */
data _null_;
set towns;
m = catx(town, '%doTown(town=', ')');
call execute(m);
run;
/* on log
Town is My Town.
Town is Your Town.
Town is His Town.
Town is Her Town.
*/
If you do need to do a table lookup, then one way is to convert your town names into a numeric format and write a simple macro to retrieve the name, given an index value. Something like:
data towns;
infile cards dlm=",";
input town :$char10. @@;
cards;
My Town,Your Town,His Town,Her Town
;
run;
/* make a numeric format */
data townfmt;
set towns end=end;
start = _n_;
rename town = label;
retain fmtname 'townfmt' type 'n';
run;
proc format cntlin=townfmt;
run;
%macro town(index);
%trim(%sysfunc(putn(&index,townfmt)))
%mend town;
%*-- check --*;
%put %town(1),%town(2),%town(3),%town(4);
/* on log
My Town,Your Town,His Town,Her Town
*/
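Dropped into the question's BY_YEAR macro, that lookup would replace the %scan against the hard-coded name list (assuming the cntlin dataset is built from your TOWN table with town-code as start and town-name as label):
%do j = 1 %to 169;
  %let name = %town(&j);   /* look the name up by town-code */
  ods tagsets.msoffice2k file="&ASR.\Town_Annual\&i.\&name..xls" style=minimal;
  ...
%end;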
Or how about you just pass both the code and the name to the macro as parameters? Like this?
%MACRO DOSTUFF(CODE=, NAME=);
DO STUFF...;
PROC EXPORT DATA=XYZ OUTFILE="&NAME."; RUN;
%MEND;
DATA _NULL_;
SET TOWNS;
CALL EXECUTE("%DOSTUFF(CODE=" || STRIP(CODE) || ", NAME=" || STRIP(NAME) || ");");
RUN;