Q

Deadlocks occurring when many transactions on the same tables run simultaneously


I read an entry on designing a flexible data model. I work in a company where a variation of the model described there has been implemented using EJBs. We have Sybase and WebLogic, and we experience deadlocks and other such problems day after day when many transactions that repeatedly access the same tables run simultaneously. I was wondering if these problems were exclusively ours because of our DBMS and overall system, or...

if in fact such things can be expected with a model like this under a continuous transaction load. The idea seemed great at the beginning, but now it seems the cost of adding tables and/or columns might be considerably lower than the cost of maintaining the system as it is. What do you think?

Deadlocks and "lunchtime locks" are pretty much a fact of life for any database application that allows interaction with the user for the duration of a transaction. While this coding style is very simple to teach and use, it isn't scalable and probably never will be.

PowerBuilder (and to a certain extent Delphi/Turbo Pascal) really made optimistic locking workable for real-world programming. Optimistic locking can be resource intensive (both RAM and bandwidth) in the worst case, but these environments made it possible to design applications that are simple to write and still scale well.

Adding tables and/or columns simply spreads the locking problems around so they are less noticeable. This dilutes the problem, but doesn't make any real attempt to solve it. When this problem finally does come to a head, it will be MUCH worse than it is now.

Optimistic locking is a relatively simple idea, not tough to code, and it scales well. The basic idea is to store a copy of the primary key for a given row, along with at least two copies of any column you might modify: one copy is left "pristine," exactly as it was read from the database, while the other can be modified as needed.
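
A minimal sketch of such a row holder in Java is shown below. The class and column names are purely illustrative, not anything from the original system:

public class CustomerRow {
    private final long customerId;      // primary key, never modified

    private final String pristineName;  // value exactly as read from the database
    private final String pristineEmail;

    private String workingName;         // working copy the user is free to edit
    private String workingEmail;

    public CustomerRow(long customerId, String name, String email) {
        this.customerId = customerId;
        this.pristineName = name;
        this.pristineEmail = email;
        this.workingName = name;
        this.workingEmail = email;
    }

    // The row only needs to be written back if a working value differs
    // from its pristine counterpart.
    public boolean isDirty() {
        return !pristineName.equals(workingName)
            || !pristineEmail.equals(workingEmail);
    }

    public long getCustomerId()   { return customerId; }
    public String getWorkingName() { return workingName; }
    public String getPristineName() { return pristineName; }
    public void setWorkingName(String name) { this.workingName = name; }
}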

In optimistic locking, when the row is flushed (which should happen automagically when it is ready for garbage collection), the underlying framework can compare the pristine copy with the current working copy. If any values differ between the pristine and working copies, the row needs to be written to disk. The pristine value can then be compared to the value currently on disk: if there is no difference, the new working values can safely be written; otherwise the code must decide how to handle the conflict (ignore the difference and overwrite the value on disk, abort the write and lose the changes in memory, or find a way to reconcile the two).
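
One common way to implement that flush step is to fold the "compare against the value on disk" into the UPDATE itself. The sketch below uses plain JDBC and assumes a hypothetical customer table; it is an illustration of the technique, not the framework code the answer refers to:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticFlush {

    // Writes the working value back only if the on-disk value still matches
    // the pristine copy read earlier. Returns true on success, false when
    // another session changed the row in the meantime.
    public static boolean flushName(Connection con, long customerId,
                                    String pristineName, String workingName)
            throws SQLException {
        String sql = "UPDATE customer SET name = ? "
                   + "WHERE customer_id = ? AND name = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, workingName);
            ps.setLong(2, customerId);
            ps.setString(3, pristineName);  // pristine value acts as the guard
            int updated = ps.executeUpdate();
            // Zero rows updated means a conflict: the caller must decide
            // whether to overwrite, abandon the change, or merge.
            return updated == 1;
        }
    }
}

Because the guard lives in the WHERE clause, no lock is held between the original read and this write; the only lock is the brief one taken by the UPDATE itself.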

The net effect of using optimistic concurrency is that users who "hit and run" (get their data, make their changes, then save them) run little or no risk of concurrency problems. Users who "graze" and sit on data for hours at a time run a significant risk, but the consequences of that choice affect only those users. Neither group forces the database to hold large or long-lived locks, so the system can scale as needed.

This was first published in November 2002
