I read an entry on designing a flexible data model. My company has implemented a variation of the model described there using EJBs. We run Sybase and WebLogic, and we experience deadlocks and similar problems day after day whenever many transactions that repeatedly access the same tables run simultaneously. I was wondering whether these problems are exclusive to us because of our DBMS and overall system, or whether such things can be expected with a model like this under a continuous transaction load. The idea seemed great at the beginning, but now it seems the cost of adding tables and/or columns might be considerably lower than the cost of maintaining the system as it stands. What do you think?
Deadlocks and "lunchtime locks" are pretty much a fact of life for any database application that allows user interaction while a transaction is open. While this coding style is very simple to teach and use, it isn't scalable and probably never will be.
PowerBuilder (and to a certain extent Delphi/Turbo Pascal) really made optimistic locking workable for real-world programming. Optimistic locking can be resource intensive (in both RAM and bandwidth) in the worst case, but these environments made it possible to design applications that are simple to write and still scale well.
Adding tables and/or columns simply spreads the locking problems around so they are less noticeable. This dilutes the problem, but doesn't make any real attempt to solve it. When this problem finally does come to a head, it will be MUCH worse than it is now.
Optimistic locking is a relatively simple idea, not tough to code, and it scales well. The basic idea is to store a copy of the primary key for a given row, along with at least two copies of any column you might modify: one copy is left "pristine," exactly as it was read from the database; the other can be modified as needed.
In optimistic locking, when the row is flushed (which should happen automagically when it is ready for garbage collection), the underlying framework compares the pristine copy with the current working copy. If any values differ between the pristine and working copies, the row needs to be written to disk. The pristine value is then compared to the value currently on disk; if there is no difference, the new working values can safely be written to disk. Otherwise the code must decide how to handle the conflict (ignore the difference and overwrite the value on disk, abort the write and lose the changes in memory, or find a way to resolve the difference).
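The flush logic described above can be sketched in plain Java. This is a minimal in-memory illustration, not production code: the class names (`OptimisticRow`, `FakeTable`) are hypothetical, and a real implementation would push the final comparison into the database itself, for example with a JDBC `UPDATE t SET col = ? WHERE pk = ? AND col = ?` (binding the pristine value in the last slot) and a check of the affected-row count.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Stand-in for a database table. A real system would hit the DBMS via
// JDBC; this map just lets the flush logic below be demonstrated.
class FakeTable {
    private final Map<Integer, String> rows = new HashMap<>();
    void write(int pk, String v) { rows.put(pk, v); }
    String read(int pk) { return rows.get(pk); }
}

// One row held in memory under optimistic locking: the primary key,
// a pristine copy (as read), and a working copy (as edited).
class OptimisticRow {
    final int pk;
    final String pristine;   // value exactly as read from the database
    String working;          // value the application is free to modify

    OptimisticRow(int pk, String value) {
        this.pk = pk;
        this.pristine = value;
        this.working = value;
    }

    boolean isDirty() {
        return !Objects.equals(pristine, working);
    }

    // Flush on release: returns true if no write was needed or the write
    // succeeded, false on a conflict the caller must resolve (overwrite,
    // abort and lose the in-memory changes, or merge).
    boolean flush(FakeTable table) {
        if (!isDirty()) {
            return true;                      // nothing changed; no write
        }
        String onDisk = table.read(pk);
        if (!Objects.equals(pristine, onDisk)) {
            return false;                     // someone else changed the row
        }
        table.write(pk, working);             // disk still matches pristine
        return true;
    }
}
```

Two sessions that read the same row and both try to save illustrate the behavior: the first flush succeeds because the disk value still matches its pristine copy; the second flush detects that the disk value has moved and reports the conflict instead of silently overwriting the first writer.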
The net effect of using optimistic concurrency is that users who "hit and run" (get their data, make their changes, then save them) run little or no risk of concurrency problems. Users who "graze" and sit on data for hours at a time run significant risk, but any consequences of that choice affect only those users. Neither style forces the database to hold wide or long-lived locks, so the system can scale as needed.
For More Information
- Dozens more answers to tough database design questions from Pat Phelan