Guide to Oracle engineered systems and server appliances
A comprehensive collection of articles, videos and more, hand-picked by our editors
Oracle Fusion Middleware VP of development Adam Messinger details what he thinks are the benefits of Oracle Exalogic, the company's integrated WebLogic and Java server appliance.
Read the full text transcript from this video below. Please note the full transcript is for reference only and may include limited inaccuracies. To suggest a transcript correction, contact firstname.lastname@example.org.
Oracle Fusion Middleware VP breaks down Oracle Exalogic
Adam Messinger: We are pretty proud of what we got done.
I think the interesting thing there is when we started putting
it together, I was not actually convinced about how much
performance benefit we would get, nor how much savings
there would be in putting together an integrated
system. As we have done it, though, I have been pleasantly
surprised on both fronts. There is a huge amount of performance
optimization that is possible in the simplest possible way:
reducing friction between layers. Just making sure that the
kernel parameters in the OS are lined up with the tuning in the
VM, lined up with the tuning in WebLogic -- that by itself is,
depending on the workload, worth tens of percent.
On top of that, we have done a bunch of technical work to take
advantage of InfiniBand, which is a really interesting technology
that hardly anyone has been able to get to work on their own.
It can work in some high-performance computing situations,
but it is a really complex technology. The value of us pre-integrating
it, assuming you buy the value of InfiniBand in the first place,
is actually pretty high, because it is hard to get working, and hard
to get working well. In terms of its actual technical value, InfiniBand
is easy to compare to Ethernet, but really, it is a lot different from
Ethernet, in that it looks a lot more like a bus than a network.
The reason that is important to us is that it lets us hook a
bunch of relatively inexpensive computers together into
something that acts like one single computer, so for running
a cluster of middleware, this is really great, because it lets us
bring down failure detection times. When one node fails, we
can detect that really quickly, like, sub-millisecond quickly.
We know with high probability that it is actually dead, unlike
with Ethernet, where you have to rely on timeouts, and
you do not know if packets are getting dropped, or whatever
is going on. In addition, because InfiniBand has all of these
user-land network protocols built on it, we get to bypass the
kernel. The reason that is important is that there are a lot
fewer context switches, so the latency of message
passing, say in Coherence, when you are replicating data
from node A to node B, goes down. That replication happens a lot faster,
because rather than going from node A, user space, down
through the kernel, across the wire, and up through the kernel --
a process in which you are waiting for scheduling to happen three
times, and you are blowing out your CPU caches while each context
switch happens -- you stay in user space. The latencies are not terrible
otherwise; we have spent a lot of time making Coherence work great
on Ethernet, but it is 10X or better on InfiniBand.
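The timeout-based failure detection Messinger contrasts with InfiniBand's fast detection can be sketched as a simple heartbeat monitor. This is an illustrative sketch, not an Oracle or Coherence API; the class and method names, and the 500 ms timeout, are assumptions for the example:

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative sketch (hypothetical class, not an Oracle API): timeout-based
// failure detection as used over Ethernet. Because a dead node and a dropped
// heartbeat packet look the same, a peer is only *suspected* dead after a
// conservative timeout window passes with no heartbeat.
public class HeartbeatDetector {
    private final Duration timeout;
    private Instant lastHeartbeat;

    public HeartbeatDetector(Duration timeout) {
        this.timeout = timeout;
        this.lastHeartbeat = Instant.now();
    }

    // Record a heartbeat received from the monitored peer.
    public void onHeartbeat(Instant when) {
        lastHeartbeat = when;
    }

    // Suspicion requires waiting out the whole timeout window; this is the
    // latency that a bus-like interconnect with reliable hardware-level
    // failure signals can avoid.
    public boolean isSuspected(Instant now) {
        return Duration.between(lastHeartbeat, now).compareTo(timeout) > 0;
    }

    public static void main(String[] args) {
        HeartbeatDetector d = new HeartbeatDetector(Duration.ofMillis(500));
        Instant t0 = Instant.now();
        d.onHeartbeat(t0);
        System.out.println(d.isSuspected(t0.plusMillis(100))); // false: still within the window
        System.out.println(d.isSuspected(t0.plusMillis(600))); // true: timeout elapsed, peer suspected
    }
}
```

The point of the sketch is the asymmetry Messinger describes: the detector can never confirm a failure faster than its timeout, whereas sub-millisecond detection requires a signal from the interconnect itself.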