
Using tcp.nodelay parameter for TCP/IP timeouts

I was reading about the "tcp.nodelay=yes" parameter (in sqlnet.ora) for TCP/IP timeouts. How do you use the snoop utility in Solaris (or any other tool) to measure performance before and after implementing the parameter? Is there any way to see whether TCP/IP is timing out? Please advise.

The tcp.nodelay parameter turns off Nagle's algorithm (the queuing of small messages until a full packet can be sent), so it is not clear how it would affect timeouts. It is not a performance feature as such: it alters the way packets are delivered on the network, which may affect performance for better or worse, so it is recommended not to change this parameter unless you know what the outcome will be. Under certain conditions, for some applications using TCP/IP, Oracle Net packets may not be flushed to the network immediately. Most often this happens when large amounts of data are streamed, and the delay comes from the TCP/IP implementation itself, which can cause unacceptable latency. To remedy the problem, you tell TCP not to delay flushing its buffers. This is not a SQL*Net feature, but rather the ability to set the no-delay (TCP_NODELAY) flag at the TCP layer. In short, tcp.nodelay activates and deactivates Nagle's algorithm; more information on Nagle's algorithm can be found in RFC 896.
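To see what the parameter does beneath Oracle Net, here is a minimal sketch in Python of the same TCP-layer flag: setting TCP_NODELAY on a socket disables Nagle's algorithm, so each write is pushed to the network immediately instead of being queued until a full-size packet can be built. (This is an illustration of the underlying socket option, not Oracle code; the addresses and port are arbitrary.)

```python
import socket

# Create an ordinary TCP client socket.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable Nagle's algorithm (RFC 896) -- this is the TCP-layer flag
# that tcp.nodelay=yes asks Oracle Net to set on its connections.
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Verify the option took effect; getsockopt returns a non-zero value
# when TCP_NODELAY is enabled on the socket.
print(cli.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))

cli.close()
```

With the flag set, every send() results in a segment being transmitted as soon as possible; without it, the kernel may hold small writes back waiting to coalesce them, which is exactly the buffering delay described above.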
