Selecting from an arbitrary number of tables with identical structure

I am not proud of publicly writing something like this.

Sample tables and data:

CREATE TABLE donaldduck ( SURNAME VARCHAR(100), TOWN VARCHAR(100) );
CREATE TABLE daisyduck  ( SURNAME VARCHAR(100), TOWN VARCHAR(100) );
CREATE TABLE goofy      ( something number(1), SURNAME VARCHAR(100), TOWN VARCHAR(100) );

INSERT INTO DONALDDUCK ( SURNAME, TOWN ) VALUES ( 'Dagobert Duck', 'Entenhausen' );
INSERT INTO daisyduck ( SURNAME, TOWN ) VALUES ( 'Daisy Duck', 'Entenhausen' );
INSERT INTO goofy ( SOMETHING, SURNAME, TOWN ) VALUES (   1, 'Goofy Dog', 'Entenhausen' );
commit;

Types and PL/SQL using a pipelined function:

create type t_NAME_OF_USER as object
(
    SURNAME VARCHAR(100),
    TOWN VARCHAR(100)
);
/

create type t_name_of_user_tab IS TABLE OF t_NAME_OF_USER;
/

CREATE OR REPLACE FUNCTION get_surname_town RETURN t_name_of_user_tab PIPELINED AS
  rc        sys_refcursor;
  l_surname varchar2(100);
  l_town    varchar2(100);
begin
  -- every table of the current schema that has both a SURNAME and a TOWN column
  for t in (
    select table_name from user_tables ut
    where
      'SURNAME' in (select column_name from user_tab_columns utc where utc.table_name = ut.table_name) and
      'TOWN'    in (select column_name from user_tab_columns utc where utc.table_name = ut.table_name)
    )
  loop
    open rc for 'select surname, town from ' || t.table_name;
    loop
      fetch rc into l_surname, l_town;
      exit when rc%notfound;
      pipe row(t_NAME_OF_USER(l_surname, l_town));
    end loop;
    close rc;
  end loop;
  return;
end;
/

SELECT:

select * from table(get_surname_town);

SURNAME                        TOWN
------------------------------ --------------------
Daisy Duck                     Entenhausen
Dagobert Duck                  Entenhausen
Goofy Dog                      Entenhausen

I think people were having trouble understanding your question because of the table structure, which is so bad it seems designed to give you a headache. As Balazs Papp indicates, very little can be implemented that will scale or that won't look like something hacked together.

However, there are solutions that can be done in PL/SQL. A pipelined table function ends up looking like a view: it will not scale to large numbers of users, but it won't look like a hack.
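For example, the pipelined function above can be exposed through a view, so callers never see the PL/SQL at all. A minimal sketch (the view name here is just an illustration):

-- wrap the pipelined function in a view; v_surname_town is an illustrative name
create or replace view v_surname_town as
  select surname, town
  from   table(get_surname_town);

select * from v_surname_town;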

Another solution involves trading speed of execution for freshness of data. A lot of data consists of ten percent active records and ninety percent archive data that is unlikely to change. If your data only has to be current once a day or once every few hours, you could implement something like the code below as a packaged procedure and have a job call it to refresh the table.

-- a sketch of the refresh approach; adjust names and checks to your schema --

CREATE TABLE ADDRESS_BOOK (
  ID           NUMBER(9)    NOT NULL,
  NAME_OF_USER VARCHAR2(250) NOT NULL,
  SURNAME      VARCHAR(100) NOT NULL,
  TOWN         VARCHAR(100) NOT NULL);
  • create a sequence to fill the ID column
  • create a primary key on ID
  • create a trigger to automatically fill the ID from the sequence when the inserted value is null (a sketch of these three steps follows below)
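A minimal sketch of those three steps; the sequence, constraint, and trigger names are just illustrations, and the :new.id assignment assumes 11g or later (on older versions use SELECT ... INTO):

-- illustrative names; adjust to your naming standards
CREATE SEQUENCE address_book_seq;

ALTER TABLE address_book ADD CONSTRAINT address_book_pk PRIMARY KEY (id);

CREATE OR REPLACE TRIGGER address_book_bir
BEFORE INSERT ON address_book
FOR EACH ROW
WHEN (new.id IS NULL)
BEGIN
  -- requires 11g or later; older versions need SELECT address_book_seq.NEXTVAL INTO :new.id FROM dual
  :new.id := address_book_seq.NEXTVAL;
END;
/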

--create your package

CREATE OR REPLACE PACKAGE PKG_ADDRESS_BOOK
AS
  PROCEDURE REFRESH;
END PKG_ADDRESS_BOOK;
/

CREATE OR REPLACE PACKAGE BODY PKG_ADDRESS_BOOK
IS
  PROCEDURE REFRESH
  IS
    -- every table of the current schema with a TOWN column, except the target table itself
    CURSOR the_tables IS
      SELECT table_name
      FROM   user_tab_cols
      WHERE  column_name = 'TOWN'
      AND    table_name <> 'ADDRESS_BOOK';
  BEGIN
    -- clear out the old data
    DELETE FROM ADDRESS_BOOK;

    FOR items IN the_tables LOOP
      -- the null ID is filled by the trigger on ADDRESS_BOOK
      EXECUTE IMMEDIATE
        'INSERT INTO ADDRESS_BOOK (ID, NAME_OF_USER, SURNAME, TOWN) ' ||
        'SELECT null, ''' || items.table_name || ''', surname, town ' ||
        'FROM ' || items.table_name;
    END LOOP;

    -- make the refreshed data visible
    COMMIT;
  END REFRESH;

END PKG_ADDRESS_BOOK;
/
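To have the refresh run periodically, one possibility is a DBMS_SCHEDULER job; the job name and interval below are just placeholders:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'REFRESH_ADDRESS_BOOK',            -- illustrative name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN PKG_ADDRESS_BOOK.REFRESH; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY',                       -- or e.g. FREQ=HOURLY;INTERVAL=4
    enabled         => TRUE);
END;
/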

This is all predicated on the idea that the data is slowly changing. Optimizations would include adding a method to change only the data that has actually changed, for example with a MERGE per source table instead of the full rebuild.
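A minimal sketch of that idea, assuming rows are only ever added to the source tables (updates and deletes would still need the full rebuild): inside the loop of REFRESH, the DELETE plus INSERT could be replaced with something like

EXECUTE IMMEDIATE
  'MERGE INTO ADDRESS_BOOK ab ' ||
  'USING (SELECT ''' || items.table_name || ''' AS name_of_user, surname, town ' ||
  '       FROM ' || items.table_name || ') src ' ||
  'ON (ab.name_of_user = src.name_of_user AND ab.surname = src.surname AND ab.town = src.town) ' ||
  'WHEN NOT MATCHED THEN ' ||
  -- the null ID is again filled by the trigger on ADDRESS_BOOK
  '  INSERT (id, name_of_user, surname, town) ' ||
  '  VALUES (null, src.name_of_user, src.surname, src.town)';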