MOSC….

Change is always welcome. It is a sign that something is moving, that something is improving. With change, new ideas get a chance to be born and to grow. By constantly changing the way things work, we can always make our work and our life better. To sum up: to grow and become more error-free, change is required.
But as it is said, nothing is always black or white; there are exceptions to every rule. I mentioned above that change is needed in our lives. Does that mean we should get married to a new person every other year to make our lives better? Heck no! Some rules work forever, and one of them is: if something is not broken, don't try to fix it. We need change, but it should be made when there is a need for it. It should not be made just to ensure that no one sits idle and that there is some work for people to do, even if it's useless work.
When the OTN Forums (http://forums.oracle.com) got upgraded (or degraded, whatever you may want to call it, suit yourself) and the new point system was introduced, there was a huge hue and cry among many people. Some liked it, some hated it, some didn't bother. Many blog posts were written about its pros and cons, and many threads were created over the Forums to understand/protest/suggest about it. I am not going to write another post on the same thing; I chose to stay neutral about it. There is no need to shout over something we can't change: whether I like or dislike the point system makes no difference to the OTN team or to their decision to keep it or discard it. In fact, I guess that with some exceptions, like repeated posts made just to collect points, or people not bothering to award points to the right answers, this point system has proved to be good. I know a couple of posters whose question count is in the thousands. Under the previous method of ranking users purely by number of posts, they could have become top users too, with no benefit to others: they just ask questions and don't contribute by helping others, and that's not what a top user should do. So with the point system, only the "noisy" people got dropped from the top-user listing. Granted, it's not the NYSE listing, dropping from which could give someone a heart attack, but still, the list now contains those who really spend time giving answers. Okay, there are still debates and arguments about some posters who don't do the right thing and have attained points through repeated posts, but I guess there is no system which is totally error-free. So, to sum up, this change over the OTN Forums was a welcome one, at least for me (and that's if I don't mention the sluggish performance of the forums afterwards; another topic, and maybe another day).
But one ketchup doesn't suit all dishes. I was browsing Twitter and came across a link that Eddie posted:

http://www.dba-oracle.com/oracle_news/news_oracle_support_community.htm

WTH! So it means that now over Metalink too there will be a point system? So what will the Metalink support analysts do? What if someone who is not from Oracle Support attains more points than anyone else? Should we start listening to him/her and ignore what the Support people say because they don't have "enough points" to prove that they are right? There is already so much debate about people striving to become top users over the OTN forums; now what would happen if someone, from Support or not, came and tried the same here? I have not seen the interface of this MOSC yet, but if it's the same as OTN, then what happens if someone finds an answer rude and reports it as abuse? Would that person be banned from Metalink, as one can be from OTN? Not to forget the requirement of Flash to use the new interface! Something which didn't require any fix got "fixed", and now everyone is in a "fix" about what will happen next.
I don't think this was needed at all. There may be something better in the mind of whoever suggested and implemented all this, but I believe that sometimes, simple is better. I still like my plain Nokia 8210 the most, no matter what the iPhone has brought up. Not so modern, maybe, I am!
Cheers
Aman….

Securefiles in Oracle 11gR1 are a completely reengineered feature. Traditional LOBs didn't present themselves so well in the long run of business, due to the limitation of their sizes and other factors which are pretty well documented in the Oracle docs. So in 11g Oracle has given us Securefiles: LOBs on steroids, if I may say so.
Well, this blog post is not about Securefiles as such. The official docs do a pretty good job of explaining how they work, and tons of other websites have already done a good job explaining them too; for example, you can read Tim Hall's excellent article about them here. This blog post is about a parameter that Oracle has given us for the better working of Securefiles. Yup, correct: Shared_io_pool.
Shared_io_pool has been added to the Oracle architecture to support large I/Os. Normally, large (and, if I may say so, sort of private) I/Os are best done using the PGA with direct path access. When Securefiles are created with the CACHE option, which is actually borrowed from their older cousin, the Basicfile aka the traditional LOB, they are read into the buffer cache, which makes access to these LOBs much faster. Using the buffer cache is a good option, but it has a couple of issues as well. First and foremost, due to their large size, LOBs may kick all the other tiny buffers out of the buffer cache. If we are really using LOBs heavily we may not mind that, but it can still be a not-so-good thing for the buffers of the small lookup tables which get thrown out. The other extreme is not using the cache at all and going with the plain NOCACHE option, which means no buffer cache access, and hence zero chance of any memory-based access for LOBs.
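To make the CACHE/NOCACHE discussion concrete, here is an illustrative sketch (the table, column, and option choices are mine, not from any specific system) of creating a Securefile LOB with caching enabled:

```sql
-- Hypothetical table; requires an 11g database where SECUREFILE storage is allowed.
create table doc_store (
  id   number,
  body blob
)
lob (body) store as securefile (
  cache   -- LOB blocks go through the cache; NOCACHE would skip it entirely
);
```

With NOCACHE instead of CACHE, reads of the LOB bypass the buffer cache, which is the "zero chance of any memory-based access" situation described above.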
In 11g, for Securefiles, Oracle has tried to address this with the introduction of the Shared_io_pool. It can be used as a shared memory region to support cached I/Os for Securefiles. Its default size is documented as zero, but I couldn't find it as zero; I found it actually has some size. I guess the benefit is clear-cut: as opposed to private memory allocated from the PGA, this is a system-level area, so it is easier to share among everyone.
Let's look at this parameter, starting with the "normal" views:

SQL> select  * from V$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE    11.1.0.6.0      Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production

SQL> select pool,name,bytes from V$sgastat where lower(name) like '%shared%';

POOL         NAME                            BYTES
------------ -------------------------- ----------
             shared_io_pool                4194304

shared pool  SHARED SERVERS INFO              3108
             generic process shared st          12
             array 2 for shared redo b          96
             array 1 for shared redo b          96
             ksfd shared pool recovery          24

6 rows selected.

SQL> startup force
ORACLE instance started.

Total System Global Area  188313600 bytes
Fixed Size                  1332048 bytes
Variable Size             134220976 bytes
Database Buffers           46137344 bytes
Redo Buffers                6623232 bytes
Database mounted.
Database opened.
SQL> select pool,name,bytes from V$sgastat where lower(name) like '%shared%';

POOL         NAME                            BYTES
------------ -------------------------- ----------
             shared_io_pool                4194304

shared pool  SHARED SERVERS INFO              3108
             generic process shared st          12
             array 2 for shared redo b          96
             array 1 for shared redo b          96
             ksfd shared pool recovery          24

6 rows selected.

SQL>

The default value of this parameter on my system comes out to be 4M. I am not sure whether that's actually correct or not; I shall do a vanilla install of the Oracle database and recheck it. Let's try to look a little deeper at this parameter.
I blogged some time ago about learning a new trick to find the info about the fixed table structures. Using the same trick, let's see what comes out for this parameter:

SQL> desc v$fixed_view_definition
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 VIEW_NAME                                          VARCHAR2(30)
 VIEW_DEFINITION                                    VARCHAR2(4000)

SQL> set long 500000
SQL> select view_definition from V$fixed_view_definition where view_name like 'V$SGASTAT';

VIEW_DEFINITION
--------------------------------------------------------------------------------
select  POOL, NAME , BYTES from GV$SGASTAT where inst_id = USERENV('Instance')
SQL> set long 500000000
SQL> select view_definition from V$fixed_view_definition where view_name like 'GV$SGASTAT';
VIEW_DEFINITION
--------------------------------------------------------------------------------
select inst_id,'',ksmssnam,ksmsslen from x$ksmfs where ksmsslen>1  union all  se
lect inst_id,'shared pool',ksmssnam, sum(ksmsslen) from x$ksmss    where ksmssle
n>1 group by inst_id, 'shared pool', ksmssnam  union all  select inst_id,'large
pool',ksmssnam, sum(ksmsslen) from x$ksmls    where ksmsslen>1 group by inst_id,
 'large pool', ksmssnam  union all  select inst_id,'java pool',ksmssnam, sum(ksm
sslen) from x$ksmjs    where ksmsslen>1 group by inst_id, 'java pool', ksmssnam
 union all  select inst_id,'streams pool',ksmssnam, sum(ksmsslen) from x$ksmstrs
    where ksmsslen>1 group by inst_id, 'streams pool', ksmssnam

So there are a couple of fixed table structures involved here. The structure X$KSMFS (Kernel Services Memory, Fixed SGA) shows the info of the fixed-value areas in the SGA, which includes the shared_io_pool as well:

SQL> select ksmssnam,ksmsslen from x$ksmfs
  2  /

KSMSSNAM                     KSMSSLEN
-------------------------- ----------
fixed_sga                     1332048
buffer_cache                 46137344
log_buffer                    6623232
shared_io_pool                4194304

So the size shown here is also 4M.
Let's see what other parameters are related to this:

SQL> select ksppinm,ksppstvl,ksppstdvl
  2   from x$ksppcv a,x$ksppi b
  3  where a.indx=b.indx and b.ksppinm like '%shared_io%';

KSPPINM                        KSPPSTVL                       KSPPSTDVL
------------------------------ ------------------------------ ----------------
__shared_io_pool_size          4194304                        4M
_shared_io_pool_size           4194304                        4M
_shared_iop_max_size           536870912                      512M
_shared_io_pool_buf_size       1048576                        1M
_shared_io_pool_debug_trc      0                              0
_shared_io_set_value           FALSE                          FALSE

6 rows selected.

SQL>

So I guess the default size is indeed 4M. I may be wrong, so if you know something more definite about this, do let me know and I shall correct it. It also seems that this pool can grow to a maximum of 512M. I am still searching for the descriptions of the other parameters; though they look straightforward, I shall wait for the exact details before speaking about them.
Reading this article from Arup Nanda, it seems Oracle has introduced something called a LOB cache specifically. Though I am not actually sure that something like this exists, it seems that this shared I/O pool is going to be linked with it in some way. It may be correct or may be not, but this is my best guess about it so far.
It seems that Oracle did some real thinking before coming out with Securefiles. Let's see what else we find out about them in the future.
Aman....

Whenever I wanted to find out which fixed tables (x$) sit behind a V$ view, I used to do it via a workaround: set autotrace on, run the query, and read the fixed tables' names from the plan. For example, if we were looking for V$LOG's fixed tables, I would do something like this:

<code>
SQL> select * from V$log;

Execution Plan
----------------------------------------------------------
Plan hash value: 2536105608

--------------------------------------------------------------------------------------------
| Id  | Operation                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |                 |     1 |   185 |     0   (0)| 00:00:01 |
|   1 |  NESTED LOOPS            |                 |     1 |   185 |     0   (0)| 00:00:01 |
|*  2 |   FIXED TABLE FULL       | X$KCCLE         |     1 |   136 |     0   (0)| 00:00:01 |
|*  3 |   FIXED TABLE FIXED INDEX| X$KCCRT (ind:1) |     1 |    49 |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

(output trimmed)
</code>
So I learned that X$KCCLE and X$KCCRT are driving it. Well, not a bad way, IMO.
Till today….
When I found this note, I realized there is actually a view which tells us the definitions behind the fixed views. Cool! So, doing the same from here:
SQL> select view_definition
  2  from v$fixed_view_definition where view_name='V$LOG';
VIEW_DEFINITION
--------------------------------------------------------------------------------
select   GROUP# , THREAD# , SEQUENCE# , BYTES , MEMBERS , ARCHIVED , STATUS , FI
RST_CHANGE# , FIRST_TIME from GV$LOG where inst_id = USERENV('Instance')
SQL> select view_definition
  2  from v$fixed_view_definition where view_name='GV$LOG';
VIEW_DEFINITION
--------------------------------------------------------------------------------
select le.inst_id, le.lenum, le.lethr, le.leseq, le.lesiz*le.lebsz, ledup, decod
e(bitand(le.leflg,1),0,'NO','YES'), decode(bitand(le.leflg,24), 8, 'CURRENT',
                         16,'CLEARING',                            24,'CLEARING_
CURRENT',        decode(sign(leseq),0,'UNUSED',        decode(sign((to_number(rt
.rtckp_scn)-to_number(le.lenxs))*        bitand(rt.rtsta,2)),-1,'ACTIVE','INACTI
VE'))), to_number(le.lelos), to_date(le.lelot,'MM/DD/RR HH24:MI:SS','NLS_CALENDA
R=Gregorian') from x$kccle le, x$kccrt rt where le.ledup!=0 and le.lethr=rt.rtnu
m and  le.inst_id = rt.inst_id
SQL>
Voila! We got the fixed tables' names and much more info about how they are being used. Cool! It's always better to take the straight road rather than a workaround. Learned something new today :-).
Aman….

Errors are not really welcome stuff ;). Yesterday, I was importing a dump file (from 8.1.7.4) into 10.2.0.3 and hit:

IMP-00020: long column too large for column buffer size (number).

After some googling and searching on Metalink, I came to know (not sure though) that it was a bug in 8.1.7.4, and that the solution was to try different values of the BUFFER parameter of import (which actually didn't help). That's what oerr also has to say:

    *Cause: The column buffer is too small. This usually occurs when importing LONG data.
    *Action: Increase the insert buffer size 10,000 bytes at a time (for example). Use this step-by-step approach because a buffer size that is too large may cause a similar problem.
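Following that advice, the retries look something like this (a sketch only; the connect string, file name, and table name here are placeholders, not the actual ones from this incident):

```
imp scott/tiger file=exp8i.dmp fromuser=scott touser=scott tables=big_tab buffer=30720
imp scott/tiger file=exp8i.dmp fromuser=scott touser=scott tables=big_tab buffer=40960
```

The idea is to bump BUFFER in small increments, since the oerr text warns that an over-large buffer can trigger a similar failure.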

And probably part of the issue was also that the export had been run with compress=N. So the guys ran the export of the table again with compress=Y (which is the default). Hopes got some oxygen and we ran the import again. The IMP-00020 was gone, and a new baby struck:

IMP-00058: ORACLE error 1438 encountered
ORA-01438: value larger than specified precision allowed for this column

Again the googling and Metalink'ing session started, and we found that it was some bug :(. Import itself creates the table and then utters ORA-01438. Complete nonsense, isn't it? Just hitting and trying, what we did was pre-create the table with all the NUMBER columns defined simply as NUMBER, without any precision, and wow, it completed without any errors. Now that is seriously stupid.

So then we did some research on the data in the table and found that there was one row causing the whole mess. There were two columns defined as NUMBER(12), but the values in them were of lengths 30 and 60. So what was it? Probably data corruption, or what? Otherwise, how would the table definition have allowed such crap into the table? We couldn't ascertain the exact reason, but a happy ending it was ;)
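For anyone in a similar fix, a query like the following can hunt down the offending rows once the data has landed in a precision-free staging table (the table and column names here are made up for illustration):

```sql
-- Hypothetical staging table imported with plain NUMBER columns.
-- NUMBER(12) holds at most 12 significant digits, so any value with
-- abs(value) >= 10^12 would raise ORA-01438 when inserted back.
select rowid, col1, col2
from   imported_tab
where  abs(col1) >= power(10, 12)
   or  abs(col2) >= power(10, 12);
```

Once the bad rows are identified, they can be fixed or discarded before moving the data into the properly-typed target table.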

Well, this was not supposed to be a post, but it was asked over forums.oracle.com whether we can transport a tablespace within the same database after some testing. The answer is yes. I was originally going to post it over there, but thanks to the new Jive software I couldn't, so I had to post it here. Have a read:

C:\Documents and Settings\Administrator>sqlplus "/ as sysdba"

SQL*Plus: Release 9.2.0.1.0 - Production on Wed Sep 17 09:31:00 2008

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

SQL> select name from V$database;

NAME
---------
MUMMYORA

SQL> create tablespace test_tt datafile 'd:\test.dbf' size 2m;

Tablespace created.

SQL> create user test identified by test default tablespace test_tt
  2  quota unlimited on test_tt;

User created.

SQL> grant create table, create session to test;

Grant succeeded.

SQL> conn test/test
Connected.
SQL> create table tt_tab(a number);

Table created.

SQL> insert into tt_tab values(1);

1 row created.

SQL> commit;

Commit complete.

SQL> conn / as sysdba
Connected.
SQL> alter tablespace test_tt read only;

Tablespace altered.

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

C:\Documents and Settings\Administrator>sqlplus "/ as sysdba"

SQL*Plus: Release 9.2.0.1.0 - Production on Wed Sep 17 09:33:44 2008

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

SQL> alter user sys identified by oracle;

User altered.

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

C:\Documents and Settings\Administrator>sqlplus "sys/oracle as sysdba" file=d:\test_tt.dmp tablespaces=TEST_TT TRANSPORT_TABLESPACE=y
Usage: SQLPLUS [ [<option>] [<logon>] [<start>] ]
where <option> ::= -H | -V | [ [-L] [-M <o>] [-R <n>] [-S] ]
      <logon>  ::= <username>[/<password>][@<connect_string>] | / | /NOLOG
      <start>  ::= @<URI>|<filename>[.<ext>] [<parameter> ...]
"-H" displays the SQL*Plus version banner and usage syntax
"-V" displays the SQL*Plus version banner
"-L" attempts log on just once
"-M <o>" uses HTML markup options <o>
"-R <n>" uses restricted mode <n>
"-S" uses silent mode

C:\Documents and Settings\Administrator>exp "sys/oracle as sysdba" file=d:\test_tt.dmp tablespaces=TEST_TT TRANSPORT_TABLESPACE=y
LRM-00108: invalid positional parameter value 'as'

EXP-00019: failed to process parameters, type 'EXP HELP=Y' for help
EXP-00000: Export terminated unsuccessfully

C:\Documents and Settings\Administrator>exp 'sys/oracle as sysdba' file=d:\test_tt.dmp tablespaces=TEST_TT TRANSPORT_TABLESPACE=y

Export: Release 9.2.0.1.0 - Production on Wed Sep 17 09:35:09 2008

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace TEST_TT ...
. exporting cluster definitions
. exporting table definitions
. . exporting table                      TT_TAB
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully without warnings.

C:\Documents and Settings\Administrator>sqlplus "/ as sysdba"

SQL*Plus: Release 9.2.0.1.0 - Production on Wed Sep 17 09:40:22 2008

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

C:\Documents and Settings\Administrator>d:

D:\>mkdir d:\bkup

D:\>copy TEST.DBF d:\bkup
        1 file(s) copied.

D:\>sqlplus "/ as sysdba"

SQL*Plus: Release 9.2.0.1.0 - Production on Wed Sep 17 09:40:50 2008

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

SQL> alter tablespace test_tt read write;

Tablespace altered.

SQL> insert into test.tt_tab values(2);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from test.tt_tab;

         A
----------
         1
         2

SQL> rem this was after the export
SQL> drop tablespace test_tt including contents and datafiles;

Tablespace dropped.

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

D:\>imp 'sys/oracle as sysdba' file=d:\test_tt.dmp tablespaces=TEST_TT TRANSPORT_TABLESPACE=y datafiles='d:\test.dbf'

Import: Release 9.2.0.1.0 - Production on Wed Sep 17 09:43:03 2008

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

Export file created by EXPORT:V09.02.00 via conventional path
About to import transportable tablespace(s) metadata...
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing SYS's objects into SYS
IMP-00017: following statement failed with ORACLE error 1565:
 "BEGIN sys.dbms_plugts.beginImpTablespace('TEST_TT',12,'SYS',1,0,8192,1,3"
 "710620,1,2147483645,8,128,8,0,1,0,8,1151101578,1,1,35710182,NULL,0,0,NULL,N"
 "ULL); END;"
IMP-00003: ORACLE error 1565 encountered
ORA-01565: error in identifying file 'd:\test.dbf'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
ORA-06512: at "SYS.DBMS_PLUGTS", line 1441
ORA-06512: at line 1
IMP-00000: Import terminated unsuccessfully

D:\>copy d:\bkup\TEST.DBF d:\
        1 file(s) copied.

D:\>imp 'sys/oracle as sysdba' file=d:\test_tt.dmp tablespaces=TEST_TT TRANSPORT_TABLESPACE=y datafiles='d:\test.dbf'

Import: Release 9.2.0.1.0 - Production on Wed Sep 17 09:43:18 2008

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

Export file created by EXPORT:V09.02.00 via conventional path
About to import transportable tablespace(s) metadata...
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing SYS's objects into SYS
. importing TEST's objects into TEST
. . importing table                    "TT_TAB"
Import terminated successfully without warnings.

D:\>sqlplus test/test

SQL*Plus: Release 9.2.0.1.0 - Production on Wed Sep 17 09:43:26 2008

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

SQL> select * from tt_tab;

         A
----------
         1

SQL>

What we have done: we made a user TEST owning a tablespace TEST_TT, put one value in it (our original work), and exported that "good time" tablespace. We then did some more work, inserting one more value (call it changes, or testing), and once we finished, we dropped the tablespace. We didn't drop the user. We imported the tablespace back, and we were back to value 1, where we started. More or less the same as attaching and detaching a tablespace.

Aman….

Well, Oracle is changing, and I believe there are a lot of changes happening "under the hood". I just got a question over the Forums: can the Shared Pool shrink if we are using ASMM? The answer as per the docs is no, it's not possible that Oracle will shrink the shared pool. But another fellow poster on the forums gave this link, where Tanel Poder revealed one more "hidden" thing: from 10.2 onwards, Oracle keeps some database buffer cache chunks in the Shared Pool heap. My first response was WTH! Why would they want to do that? But Tanel explains it well. I am posting the entire entry here with the reference. Things are changing, big time.
Well, believe it or not, in addition to keeping private undo and redo buffers in the shared pool, Oracle can nowadays hold some of the buffer cache there as well.

Sounds crazy? Check this!
SQL> select
  2    s.ksmchptr SP_CHUNK,
  3    s.ksmchsiz CH_SIZE,
  4    b.obj      DATAOBJ#,
  5    b.ba       BLOCKADDR,
  6    b.blsiz    BLKSIZE,
  7    decode(b.class,
  8            1,'data block',
  9            2,'sort block',
 10            3,'save undo block',
 11            4,'segment header',
 12            5,'save undo header',
 13            6,'free list',
 14            7,'extent map',
 15            8,'1st level bmb',
 16            9,'2nd level bmb',
 17           10,'3rd level bmb',
 18           11,'bitmap block',
 19           12,'bitmap index block',
 20           13,'file header block',
 21           14,'unused',
 22           15,'system undo header',
 23           16,'system undo block',
 24           17,'undo header',
 25           18,'undo block',
 26           class) BLKTYPE,
 27    decode(b.state,
 28            0,'free',1,'xcur',2,'scur',3,'cr',4,'read',
 29            5,'mrec',6,'irec',7,'write',8,'pi',9,'memory',
 30           10,'mwrite',11,'donated',b.state) BLKSTATE
 31  from
 32    x$bh b,
 33    x$ksmsp s
 34  where (
 35          b.ba >= s.ksmchptr
 36      and to_number(b.ba, 'XXXXXXXXXXXXXXXX') + b.blsiz <
            to_number(ksmchptr, 'XXXXXXXXXXXXXXXX') + ksmchsiz
 37        )
 38    and s.ksmchcom = 'KGH: NO ACCESS'
 39  order by s.ksmchptr, b.ba;

SP_CHUNK            CH_SIZE   DATAOBJ# BLOCKADDR        BLKSIZE BLKTYPE              BLKSTATE
---------------- ---------- ---------- ---------------- ------- -------------------- ----------
0000000387C01FE0    1269792       9001 0000000387C26000    8192 data block           xcur
                                  9001 0000000387C28000    8192 data block           xcur
                                  9001 0000000387C2A000    8192 data block           xcur
                                     2 0000000387C2C000    8192 data block           xcur
                                  9001 0000000387C2E000    8192 1st level bmb        xcur
                                  9001 0000000387C30000    8192 2nd level bmb        xcur
                                  9001 0000000387C32000    8192 segment header       xcur
                            4294967295 0000000387C34000    8192 36                   xcur
                            4294967295 0000000387C36000    8192 36                   xcur
                                 51673 0000000387C38000    8192 data block           xcur
                            4294967295 0000000387C3A000    8192 36                   xcur
                            4294967295 0000000387C3C000    8192 22                   xcur
                            4294967295 0000000387C3E000    8192 22                   xcur
                                    37 0000000387C40000    8192 data block           xcur
                            4294967295 0000000387C42000    8192 22                   xcur
                            4294967295 0000000387C44000    8192 30                   xcur
                            4294967295 0000000387C46000    8192 30                   xcur
                            4294967295 0000000387C48000    8192 30                   xcur
                                   573 0000000387C4A000    8192 data block           xcur

From matching the SP_CHUNK and BLOCKADDR values you see that there are cache buffers which actually reside in the shared pool heap.

When MMAN tries to get rid of a shared pool granule it obviously can't just flush and throw away all the objects in it. As long as anybody references chunks in this granule, it cannot be completely deallocated.

Oracle faced a decision about what to do in this case:
1) wait until no chunks are in use anymore – this might never happen
2) suspend the instance, relocate the chunks somewhere else and update all SGA/PGA/UGA/CGA structures for all processes accordingly – this would get very complex
3) flush as many chunks from the shared pool granule as possible, mark them as "KGH: NO ACCESS" so that nobody else touches them, mark the corresponding entry as DEFERRED in V$SGA_RESIZE_OPS, and notify the buffer cache manager about the new memory locations available for use.

Oracle has gone with option 3, as option 1 wouldn't satisfy us and option 2 would be very complex to implement, and it would mean a complete instance hang for seconds to minutes.

So, Oracle can share a granule between the shared pool and buffer cache data. This sounds like a mess, but there is not really a better way to do it (leaving aside the question of why the heck you would want to continuously reduce your shared pool size anyway).

This was tested on Oracle 10.2.0.2 on Solaris 10/x64

Tanel.
And the link is,

http://www.orafaq.com/maillist/oracle-l/2006/08/22/0958.htm
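As a hedged footnote to Tanel's point, one way to look for this on your own 10.2+ instance (a sketch only; X$ access needs a SYS connection, and X$ internals are undocumented and version-specific) is:

```sql
-- Shrink operations that MMAN had to defer.
select component, oper_type, status, initial_size, final_size
from   v$sga_resize_ops
where  status = 'DEFERRED';

-- Tell-tale "KGH: NO ACCESS" chunks parked in the shared pool heap.
select count(*) chunks, sum(ksmchsiz) bytes
from   x$ksmsp
where  ksmchcom = 'KGH: NO ACCESS';
```

If both queries return rows, you are likely looking at granules shared between the shared pool and the buffer cache, exactly as described above.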

My head is spinning :-S.
Aman….

Today in my program, we hit a rather obscure error. We were trying to create a listener using the EM console, but it kept throwing an error that the location was not the right one, even though the location was perfectly alright. I am not sure what the error actually is, and despite my best efforts, I couldn't get any clue.
Now, this was not solved, but in the meanwhile someone asked me what the use is of the NetProperties file which is available in $ORACLE_HOME/network/tools. This file is used by the network-related tools, for example Net Manager, Net Configuration Assistant, etc. Despite searching a lot, I couldn't find anything about it either. Grrr!
Well, neither of the two things led anywhere, but somehow in the search I stumbled upon a workaround: we need to make these changes in the NetProperties file to make EM work:
1) Go to $ORACLE_HOME/network/tools
2) In the file NetProperties, comment out the line INSTALLEDCOMPONENTS=ORACLENET
And it did work. Now the million-dollar question is: what does this setting mean, and why did commenting it out help? And the zillion-dollar question (from the very start) remains a mystery: what does the NetProperties file actually control?
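For the record, the edit itself can be scripted. This sketch demonstrates it on a throwaway copy of the file; in real life the target is $ORACLE_HOME/network/tools/NetProperties, and you should keep a backup before touching it:

```shell
# Demo on a disposable copy so nothing real is touched.
f=/tmp/NetProperties.demo
printf 'INSTALLEDCOMPONENTS=ORACLENET\n' > "$f"
# Comment the line out rather than deleting it, so it is easy to restore.
sed -i 's/^INSTALLEDCOMPONENTS=ORACLENET/#INSTALLEDCOMPONENTS=ORACLENET/' "$f"
cat "$f"
```

Running it leaves the line commented out, which is exactly the manual change described in step 2 above.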
Search continues….
Aman….
