sql - Oracle ORA-04030 even when using Bind Variables in a LOOP
I have to delete 500 million rows from a remote table using PL/SQL. Since the undo tablespace cannot handle the volume, the deletes are done in batches of 1,000,000 rows, each followed by a commit. To reduce hard parsing, I use bind variables with the following syntax:
    str := 'delete from table@link where id >= :x and id < :y';
    execute immediate str using start_id, start_id + 1000000;
After each execution, start_id is incremented by 1,000,000, until sql%rowcount returns 0 and end_id (which is known) is reached.
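For reference, here is a minimal sketch of that loop as I understand it; the object name some_table@link, the literal id values, and the exact exit condition are placeholders for illustration, not my actual code:

    declare
      l_batch_size constant pls_integer := 1000000;
      l_sql        varchar2(200);
      l_rows       pls_integer;
      start_id     number := 100000000000;       -- placeholder: lowest id at the target
      end_id       number := 50000000000011111;  -- placeholder: known highest id
    begin
      l_sql := 'delete from some_table@link where id >= :x and id < :y';
      loop
        execute immediate l_sql using start_id, start_id + l_batch_size;
        l_rows := sql%rowcount;   -- capture the batch row count before committing
        commit;                   -- keep each 1,000,000-row batch within the undo tablespace
        start_id := start_id + l_batch_size;
        exit when l_rows = 0 and start_id > end_id;
      end loop;
    end;
    /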
But the process keeps failing with ORA-04030, as follows:
    ORA-04030: out of process memory when trying to allocate 16408 bytes (qerhj hash-joi,qerhj bit vector)
    ORA-04030: out of process memory when trying to allocate 41888 bytes (kxs-heap-c,temporary memory)
Note that, since bind variables are used, there is no hard parsing after the first execution.
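If it is useful, one way to sanity-check that claim is to watch the cursor in v$sql while the loop runs: executions should keep climbing while loads stays at 1 (parse_calls counts soft parses too, so loads is the better indicator of hard parsing). The sql_text filter below is just a placeholder for the actual statement:

    select sql_id, child_number, parse_calls, executions, loads
    from   v$sql
    where  sql_text like 'delete from some_table@link%';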
One complication is the range of IDs at the target. Assuming the first few rows are in increasing order, the IDs are:
    100,000,000,000
    200,000,000,000
    50,000,000,000,000,000
    50,000,000,000,011,111
On the second iteration, the IDs from 200,000,000,000 to 200,000,100,000 are deleted. But since there are no IDs in that range, it would take 50,000,000,000 iterations to reach the next row (50,000,000,000,000,000 / 1,000,000 = 50,000,000,000).
Of course, I can examine the IDs at the target and choose a more suitable range (larger than the default 1 million). But regardless of the range chosen, the process should not run out of memory.
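For what it's worth, here is a sketch of the range-skipping idea mentioned above, again with placeholder names (some_table@link, l_begin), not my actual code: before each batch the loop jumps to the next id that actually exists at the target, so an empty gap costs one query instead of billions of no-op deletes:

    declare
      l_batch_size constant pls_integer := 1000000;
      l_begin      number := 0;
    begin
      loop
        -- jump straight to the next existing id at or above l_begin
        execute immediate
          'select min(id) from some_table@link where id >= :x'
          into l_begin using l_begin;
        exit when l_begin is null;   -- no rows left to delete
        execute immediate
          'delete from some_table@link where id >= :x and id < :y'
          using l_begin, l_begin + l_batch_size;
        commit;
        l_begin := l_begin + l_batch_size;
      end loop;
    end;
    /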
Added code:

remote.sql (execute on the remote database):
    create table test1 ( id number(38) primary key );

    insert into test1 select level from dual connect by level < 1000000;
    insert into test1 values ( 1000000000000 );
    insert into test1 values ( 2000000000000 );
    commit;

    exec dbms_stats.gather_table_stats ( ownname => user, tabname => 'test1', cascade => true, estimate_percent => 100 );
    commit;
local.sql:
    create or replace procedure batch_del
    as
      l_min_val  integer;
      l_max_val  integer;
      l_cnt      integer;
      l_cnt_dst  integer;
      l_begin    integer;
      l_end      integer;
      l_str      varchar2(1000);
      l_tot_cnt  integer;
      pragma autonomous_transaction;
    begin
      l_tot_cnt := 0;
      l_str := 'select min(id), max(id), count(*) from test1@dst';
      execute immediate l_str into l_min_val, l_max_val, l_cnt_dst;
      dbms_output.put_line ( 'min: ' || l_min_val || ' max: ' || l_max_val || ' total : ' || l_cnt_dst );
      l_begin := l_min_val;
      while l_begin < l_max_val
      loop
        begin
          l_end := l_begin + 100000;
          delete from test1@dst where id >= l_begin and id < l_end;
          l_cnt := sql%rowcount;
          dbms_output.put_line ( 'rows processed : ' || l_cnt );
          l_tot_cnt := l_tot_cnt + l_cnt;
          dbms_output.put_line ( 'rows processed so far : ' || l_tot_cnt );
          commit;
        exception
          when others then
            dbms_output.put_line ( 'error : ' || sqlcode );
        end;
        l_begin := l_begin + 100000;
      end loop;
      dbms_output.put_line ( 'total : ' || l_tot_cnt );
    end;
    /
**All-local implementation**
    drop table test1;
    create table test1 ( id number(38) primary key );

    insert into test1 select level from dual connect by level < 1000000;
    insert into test1 values ( 1000000000000 );
    insert into test1 values ( 2000000000000 );
    commit;

    exec dbms_stats.gather_table_stats ( ownname => user, tabname => 'test1', cascade => true, estimate_percent => 100 );
    commit;

    create or replace procedure batch_del
    as
      l_min_val  integer;
      l_max_val  integer;
      l_cnt      integer;
      l_begin    integer;
      l_tot_cnt  integer;
      pragma autonomous_transaction;
    begin
      l_tot_cnt := 0;
      select min(id), max(id) into l_min_val, l_max_val from test1;
      l_begin := l_min_val;
      while l_begin < l_max_val
      loop
        begin
          delete from test1 where id >= l_begin and id < l_begin + 10000;
          l_cnt := sql%rowcount;
          dbms_output.put_line ( 'rows processed : ' || l_cnt );
          l_tot_cnt := l_tot_cnt + l_cnt;
          dbms_output.put_line ( 'rows processed so far : ' || l_tot_cnt );
          commit;
        exception
          when others then
            dbms_output.put_line ( 'error : ' || sqlcode );
        end;
        l_begin := l_begin + 10000;
      end loop;
      dbms_output.put_line ( 'total : ' || l_tot_cnt );
    end;
    /

    set timing on;
    set serveroutput on size unlimited;
    exec batch_del;
This is not an answer as such, but it's too large to fit in a comment, so here it is!
There's no need for dynamic SQL here. If I were you, I'd rewrite it as:
    create or replace procedure batch_del
    as
      l_min_val          integer;
      l_max_val          integer;
      l_cnt_dst          integer;
      l_begin            integer;
      l_end              integer;
      l_rows_to_process  number := 100000;
      pragma autonomous_transaction;
    begin
      select min(id), max(id), count(*)
      into   l_min_val, l_max_val, l_cnt_dst
      from   test1@dst;

      l_begin := l_min_val;
      while l_begin < l_max_val
      loop
        begin
          l_end := l_begin + l_rows_to_process;
          delete from test1@dst where id >= l_begin and id < l_end;
          dbms_output.put_line('rows deleted: ' || sql%rowcount);
          commit;
        exception
          when others then
            dbms_output.put_line('error : ' || sqlcode);
        end;
        l_begin := l_begin + l_rows_to_process;
      end loop;
    end;
    /
Alternatively, if you've got non-consecutive IDs, perhaps this would be more performant for you:
    create or replace procedure batch_del
    as
      type type_id_array is table of number index by pls_integer;
      l_min_id_array     type_id_array;
      l_max_id_array     type_id_array;
      l_rows_to_process  number := 10000;
      pragma autonomous_transaction;
    begin
      select min(id) min_id, max(id) max_id
      bulk collect into l_min_id_array, l_max_id_array
      from   (select --/*+ driving_site(t1) */
                     id,
                     ceil((row_number() over (order by id)) / l_rows_to_process) grp
              from   test1 t1)
      group by grp
      order by grp;

      for i in l_min_id_array.first .. l_min_id_array.last
      loop
        begin
          delete from test1 where id between l_min_id_array(i) and l_max_id_array(i);
          dbms_output.put_line('rows deleted in loop ' || i || ': ' || sql%rowcount);
          commit;
        exception
          when others then
            -- hopefully there is a better way of logging the error in your
            -- production db, e.g. a separate procedure writing to a log table.
            dbms_output.put_line('error in loop ' || i || ': ' || sqlcode);
        end;
      end loop;
    end batch_del;
    /
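Because this second version pre-computes chunks of l_rows_to_process rows with row_number(), each delete touches at most that many rows no matter how sparse the IDs are, which avoids the empty-range iterations you described. Either version can then be run the same way as your all-local test harness, e.g.:

    set timing on;
    set serveroutput on size unlimited;
    exec batch_del;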