In this article I would like to share my experience using the multi-threaded server (MTS) feature to support 11,000+ concurrent database sessions. The application is developed in VC++ and deployed on users' desktops, which connect directly to the database (a basic client-server architecture).
Today, most OLTP databases use a multi-tier architecture to support hundreds or thousands of concurrent database sessions. Various multi-tier solutions are available, such as Oracle Connection Manager, IBM WebSphere, BEA WebLogic, and Sun iPlanet. My shop has been using MTS for the last four years, right from the 8.1.x releases (including 8.1.7.x) through 9.2.0.x, and we look forward to configuring MTS in Oracle 10g.
Although the MTS feature has been available for some time, it became an integral part of the database only in Oracle8i, followed by parameter changes and enhancements in Oracle9i.
What is MTS?
Multi-threaded server, also known as shared server, allows many user processes to share a small number of shared server processes to connect to the database. Without MTS, each user process spawns its own dedicated server process, consuming OS memory. A dedicated server process remains associated with its user process for the duration of the connection.
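You can see which mode each connection is actually using by querying the SERVER column of v$session; this is a simple sketch (note that an idle shared server session reports NONE rather than SHARED, because it is not attached to a server process between calls):

```sql
-- Count current sessions by server mode.
-- Expected values: DEDICATED, SHARED, NONE (idle shared server session).
select server, count(*) "Sessions"
from v$session
group by server;
```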
Configuring MTS pre-spawns a defined number of dispatcher and shared server processes, which user processes then use for database connectivity.
Client - Database connectivity in 9i:
- The client contacts the listener and requests a connection to a dispatcher
- The listener hands the connection socket over to the dispatcher, establishing a direct link between the client and the dispatcher. This cuts down network messages and speeds up the connection process.
The advantage of MTS is that system overhead is reduced and fewer resources are used, increasing the number of users that can be supported.
My observation is that MTS is good for small OLTP transactions and performance is slow for batch type transactions.
How to configure MTS
The following initialization parameters need to be configured for MTS:
dispatchers = "(ADDRESS=(PROTOCOL=TCP)(HOST=server_name))(DISPATCHERS=n)"
local_listener = "(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=server_name)(PORT=1521)))"
max_dispatchers = n
shared_servers = n
max_shared_servers = n

where n is a numeric value.
In my production database, I have configured 50 dispatchers (300 sessions per dispatcher, as a rule of thumb) and 350 shared server processes to support 11,000+ concurrent database sessions.
With max_dispatchers=1000 and max_shared_servers=3000, I can manually spawn additional dispatcher processes as required, and shared server processes are spawned and released automatically based on the workload. The MAX_ parameters set an upper limit on the number of dispatcher and shared server processes. With these parameter values the database efficiently supports 11,000+ concurrent users.
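In Oracle9i both SHARED_SERVERS and DISPATCHERS are dynamic parameters, so they can be adjusted online up to the MAX_ limits without restarting the instance. A sketch (the host name and counts here are illustrative, not my production values):

```sql
-- Grow the shared server pool; cannot exceed max_shared_servers.
alter system set shared_servers = 400;

-- Add dispatchers by raising the count in the dispatchers specification.
-- (HOST value is illustrative.)
alter system set dispatchers =
  '(ADDRESS=(PROTOCOL=TCP)(HOST=server_name))(DISPATCHERS=55)';
```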
With the above dispatcher parameters, the dispatcher processes pick random available ports when the database starts. The ports differ with every startup, which is a concern when a firewall sits between the database and the clients, since each port must be opened at the firewall level to allow SQL*Net connections.
I faced exactly this scenario, and overcame the issue by configuring dispatchers with predefined ports:
dispatchers="(ADDRESS=(PROTOCOL=TCP)(HOST=server_name1)(PORT=30001))(DISPATCHERS=1)"
dispatchers="(ADDRESS=(PROTOCOL=TCP)(HOST=server_name1)(PORT=30002))(DISPATCHERS=1)"
dispatchers="(ADDRESS=(PROTOCOL=TCP)(HOST=server_name1)(PORT=30003))(DISPATCHERS=1)"
The client connections are evenly distributed among the dispatcher processes.
Use "lsnrctl services LISTENER" to see how many connections each dispatcher is currently handling; the maximum number of connections a single dispatcher can handle is OS specific.
$ lsnrctl services LISTENER

LSNRCTL for Solaris: Version 9.2.0.x - Production on 16-SEP-2004 13:49:54
Copyright (c) 1991, 2002, Oracle Corporation.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=TCP)(Host=server_name)(Port=1526))
Services Summary...
Service "abc.world" has 1 instance(s).
  Instance "abc", status READY, has 50 handler(s) for this service...
    Handler(s):
      "D049" established:924 refused:0 current:209 max:972 state:ready
         DISPATCHER <machine: cceastprod0, pid: 19640>
         (ADDRESS=(PROTOCOL=tcp)(HOST=10.174.5.89)(PORT=51919))
Another aspect of implementing MTS is configuring the large pool (the LARGE_POOL_SIZE parameter). With MTS, the large pool holds session information, and starting with Oracle9i sorting also takes place in the large pool. So as the number of concurrent sessions grows, large pool usage grows. Configuring the large pool prevents Oracle from using the shared pool to store session information and avoids shared pool fragmentation.
I have configured a large pool of 9 GB, big enough to support a large user base and avoid the ORA-4031 error. The ORA-4031 error prevents new database connections and also fails query executions.
db_cache_size             = 1536M        # 1.5 GB
shared_pool_size          = 1717986919   # 1.6 GB
shared_pool_reserved_size = 314572800    # 300 MB
log_buffer                = 10485760     # 10 MB
large_pool_size           = 9663676416   # 9.0 GB
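One way to sanity-check large pool sizing is to measure how much UGA memory current sessions actually consume, since that is what the large pool holds under MTS. A rough sketch using v$sesstat and the 'session uga memory' statistic:

```sql
-- Approximate total and per-session UGA memory currently in use.
select round(sum(s.value)/1048576)                  "Total UGA MB",
       round(sum(s.value)/count(distinct s.sid)/1024) "Avg KB per session"
from v$sesstat s, v$statname n
where s.statistic# = n.statistic#
  and n.name = 'session uga memory';
```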
Performing batch processes
As mentioned earlier, MTS is good for small OLTP transactions and slow for batch transactions. A few modules in the application require batch processing. In that scenario I use a different database connect string to connect as a DEDICATED session and perform the batch execution.
Using the SERVER=DEDICATED option in the database connect string (tnsnames.ora file) initiates a dedicated connection in an MTS environment.
abc.world =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(Host = server_name)(Port = 1521)))
    (CONNECT_DATA =
      (SERVICE_NAME = abc.world)
      (SERVER = DEDICATED))
  )
There are a few things to monitor with respect to MTS:
1. Number of MTS sessions – to track client session growth and determine the required number of shared server and dispatcher processes.

select count(username) "MTS Users"
from v$session
where server <> 'DEDICATED'
/

2. LARGE_POOL usage –

select pool, name, round((bytes/1048576),0) "Free MB"
from v$sgastat
where pool like '%large%'
/

POOL        NAME                       Free MB
----------- -------------------------- ----------
large pool  free memory                      6658
large pool  session heap                     2558

3. Dispatcher performance – wait times for client connections.

select paddr, type, queued, wait, totalq,
       decode(totalq, 0, 0, wait/totalq) "AVG WAIT"
from v$queue
/

PADDR            TYPE           QUEUED       WAIT     TOTALQ   AVG WAIT
---------------- ---------- ---------- ---------- ---------- ----------
00               COMMON              0    1132534   48174473 .023509007
00000006312B0278 DISPATCHER          0      42327    1124933 .037626241
00000006312B0C88 DISPATCHER          0      37024     983252  .03765464
00000006312B1190 DISPATCHER          0      36981     963610 .038377559
00000006312B1698 DISPATCHER          0      40891    1094414 .037363374
00000006312AF868 DISPATCHER          0      38734    1043861 .037106473
00000006312B0780 DISPATCHER          0      40924    1081739 .037831677
00000006312B1BA0 DISPATCHER          0      40740    1089179  .03740432

AVG WAIT is the average wait (in hundredths of a second) per queued request.

4. Shared server efficiency – to check shared server process activity; the percentage of time busy can be the deciding factor for adding or removing shared server processes.

select name, status, requests,
       (busy /(busy + idle)) * 100 "% of time busy"
from v$shared_server
order by (busy /(busy + idle)) * 100 desc
/

NAME STATUS             REQUESTS % of time busy
---- ---------------- ---------- --------------
S000 WAIT(RECEIVE)      15060329     31.3371454
S001 EXEC               12422988     23.3545055
S002 WAIT(COMMON)        8741103     18.5982143
S003 EXEC                5636267     13.8615983
S004 WAIT(COMMON)        3242496     9.34363817
S005 WAIT(COMMON)        1694639      4.9032512
S006 WAIT(COMMON)         752524     2.55941631
S007 WAIT(COMMON)         312774     1.06611165
S212 WAIT(COMMON)            429     .994463775
Based on the "% of time busy" I add shared server processes. I started with 100 shared server processes two years ago, and my database now runs with 350.
MTS-related data dictionary views
- v$circuit -- user connections to the database through dispatchers and servers
- v$shared_server -- information on shared server processes
- v$dispatcher -- information on dispatcher processes
- v$mts -- information for tuning MTS
- v$queue -- information on multithreaded server queue
- v$session -- information for each current session
- v$dispatcher_rate -- information and statistics about the rate at which each dispatcher is receiving and handling messages, events, and so on.
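These views can be joined together; for example, to see which dispatcher each shared server session is routed through, a sketch joining v$circuit (via its SADDR and DISPATCHER address columns) to v$session and v$dispatcher:

```sql
-- Map each shared server session to the dispatcher carrying it.
select s.sid, s.username, d.name "Dispatcher"
from v$circuit c, v$session s, v$dispatcher d
where c.saddr = s.saddr
  and c.dispatcher = d.paddr;
```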
Issues – ORA-07445: exception encountered: core dump
So far, the only major issue I have seen with MTS is a dispatcher process getting abruptly killed. For some unknown reason a dispatcher process suddenly dies, killing its connected sessions. The following is an extract from the alert.log file for one such error:
Wed Sep 8 09:07:35 2004
Errors in file /u01/bdump/abc_d050_521.trc:
ORA-07445: exception encountered: core dump [00000001026BA2D4] [SIGSEGV]
[Address not mapped to object] [0x000000018]

Wed Sep 8 09:07:45 2004
found dead dispatcher 'D050', pid = (667, 207)
The dead dispatcher process is restarted once PMON cleans up the in-doubt sessions. The workaround for this issue is to turn off the dead connection detection (DCD) feature by setting SQLNET.EXPIRE_TIME = 0 in the sqlnet.ora file. This issue occurred in the 9.2.0.x releases we ran, and Oracle is currently working to resolve it.