
High Load Java App Server: Things to consider when building

Many developers start scoping a project by focusing on front end demand. Coupled with the backend requirements, this often points to the most obvious choices for the application architecture. As we'll see, though, it's often useful to do a bit more digging before settling on a particular design. Many pitfalls can be avoided when proper time and consideration are put into technology and architecture selection. Some questions worth asking include: "What do we mean by high load?", "Will we ever need to scale?", "Are there predetermined hardware and/or operating system requirements?", "What demands are being placed on the backend?", "Is code base maintenance a priority?", "Will the code base ever need to be ported to new platforms?", and "Is this a one-off deployment?" While your gut may be screaming "high load Java application server," a little preliminary consideration will pay off in the long run, either validating your decision or helping you defend a different path.


A typical High Load Java Application Server Architecture (courtesy of Newcircle.com)

What Does High Load Mean?

When a front end requirement gets labeled "high load," it's well worth understanding what that might mean; mischaracterization can lead to dropped requests. Server hits are the metric most often cited, but this number or range is usually not the whole story. First, the overhead of a server hit can vary dramatically. When serving static web pages, hits may be cheap, so high-frequency hits can be handled easily. If each hit requires something more involved (credit card processing, for example), it may be rather expensive to service, and "high load" takes on a completely different meaning. Other considerations include hit ranges and long-term stability. If hits arrive in bursts, it may be possible to buffer them, but care must be taken to ensure that deferring real-time processing will not simply postpone or mask reaching processing limits.

If your initial thought is "high load," it implies you expect a fairly consistent volume of transactions that varies little, or varies predictably over some cycle. You expect to predict and tune for your demand so precisely that you neither underutilize nor overwhelm your server capacity; in other words, you are confident you can back up your expected traffic figures with empirical calculations. If that is not the case, you may need to design in dynamic load balancing and the ability to spin up and wind down application servers as demand fluctuates. If spikes are possible, requests will need to be buffered, which means dealing with response delays while the backlog is processed.
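The buffering idea above can be sketched with the JDK's standard `ThreadPoolExecutor`. This is a minimal illustration, not a production design: the pool size and queue depth are made-up numbers you would derive from your own capacity calculations, and the caller-runs policy is just one possible backpressure strategy.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BufferedRequestHandler {
    // A bounded queue absorbs short bursts; when it fills, CallerRunsPolicy
    // applies backpressure (the submitting thread does the work itself)
    // instead of silently dropping requests.
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            8, 8,                           // fixed pool sized to steady-state demand (assumed)
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(1000), // burst buffer: at most 1000 queued requests (assumed)
            new ThreadPoolExecutor.CallerRunsPolicy());

    public void handle(Runnable request) {
        pool.execute(request);
    }

    public void shutdown() {
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Note that a bounded buffer only masks *short* bursts; if sustained arrival rate exceeds sustained service rate, the queue fills and latency climbs, which is exactly the masking effect warned about above.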

There are two ways to expand your processing power should demand exceed the default configuration. The system can be scaled "vertically" by adding processing resources to the existing server, either by increasing CPU share and memory on a partitioned system or by upgrading the hardware to support the increased demand. It can also be scaled "horizontally" by adding servers and machines to accommodate the increased load. In the latter case, load balancing requests on the front end, as well as managing databases, legacy systems, and other external resources, requires anticipating this eventuality in the architecture and design stages. A bit of forethought will be greatly appreciated when production systems are struggling under unexpected demand. If expansion is a certainty, or demand is likely to fluctuate wildly, horizontal expansion can be made dynamic: in a carefully designed system, additional servers and hardware can be brought on and off line as needed.
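The front end distribution piece of horizontal scaling can be as simple as round-robin. Here is a toy sketch of the idea, assuming backends are identified by hypothetical `host:port` strings; real deployments would typically use a dedicated balancer (hardware, nginx, HAProxy, or a cloud service) with health checks rather than hand-rolled code.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Toy round-robin balancer: spreads requests evenly across a fixed
// set of application servers.
public class RoundRobinBalancer {
    private final List<String> backends; // e.g. "host:port" of each app server (assumed format)
    private final AtomicLong counter = new AtomicLong();

    public RoundRobinBalancer(List<String> backends) {
        this.backends = List.copyOf(backends);
    }

    public String next() {
        // floorMod keeps the index non-negative even after counter overflow.
        int idx = Math.floorMod(counter.getAndIncrement(), backends.size());
        return backends.get(idx);
    }
}
```

For truly dynamic horizontal scaling, the backend list would need to be mutable and driven by health checks and load metrics, which is where most of the real design work lies.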

Is implementing your application server in Java the right choice?

So why would you choose Java for your high load app server? The most likely answer is that it is the language and toolset you have the most experience with. That's not a poor choice in most cases, but there are situations where Java is the best choice for your application server and situations where alternatives are well worth considering.

Java compiles to platform-independent bytecode, which means a single code base can be written once and run on any hardware that has a Java Virtual Machine (JVM). The JVM provides a standard interface to the application server and implements machine-dependent calls efficiently for the underlying hardware. This is especially useful if your high load app server is going to run in many different hardware environments, or if you want to support a customer base without requiring specific hardware configurations. An application server written entirely in Java also has maintenance advantages, since you will be updating and maintaining a single, less divergent code base. It can also be ported to future, as-yet-unknown hardware platforms, assuming they support a JVM.

Performance used to be a serious drawback because the JVM introduced an extra computational layer, but advances in hardware support and the evolution of JVM implementations (just-in-time compilation in particular) have improved to the point where the cost of the intermediary layer is often negligible. There are still specific cases where Java may not be the best choice, especially for complex operations involving heavy floating point calculations. In such cases a language closer to the hardware, such as C or C++, may be better able to take advantage of the machine's specific capabilities.

Anatomy of a Java Programmer

Specific Pitfalls to be Avoided

Servers need to manage several different tasks. Request rates, session management, throughput, connection rates, and latency can all present challenges to performance and capacity, and a good design and architecture depends on properly characterizing them.

Request rates are usually the driver for everything else. As discussed earlier, properly characterizing the nature, frequency, and volatility of request rates is essential to sizing your server requirements and ensuring availability and reliability.

Session management includes budgeting for many of the other items on this list: requests, processing and any specialized calculations, connection rates, and latency all figure into sizing your server capacity. Throughput is a measure of how many requests you can process at once and how long each takes. The more complex the computations and the more servicing each session requires, the higher the per-session overhead on the server. Overhead and management can easily surpass the computation itself, so it is very important to understand how sessions depend on the other components of the system.

Connection rates and overhead are another often overlooked area. Database proximity and even underlying housekeeping such as garbage collection can severely impact expected performance and need to be sought out, recognized, and designed for. Connection latency must also be accounted for: bandwidth will be anticipated in the design, but poorly understood interfaces often introduce unexpectedly long wait times, and it is not uncommon to have to interface with legacy systems where latency is unavoidable. Designing a system that anticipates and monitors for exceeded capacity or lost availability, and then deals with the situation gracefully, is essential to building a robust, highly available Java app server.
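The relationship between request rate, latency, and concurrent sessions described above can be made concrete with Little's Law (concurrency = arrival rate × average time in system). The numbers below are illustrative assumptions, not recommendations:

```java
// Back-of-the-envelope capacity check using Little's Law:
// concurrent sessions = arrival rate (req/s) * average latency (s).
public class CapacityEstimate {
    public static long concurrentSessions(double requestsPerSecond, double avgLatencySeconds) {
        return Math.round(requestsPerSecond * avgLatencySeconds);
    }

    public static void main(String[] args) {
        // e.g. an assumed 500 req/s at 200 ms average service time
        // implies roughly 100 requests in flight at any moment.
        System.out.println(concurrentSessions(500, 0.2)); // prints 100
    }
}
```

The same arithmetic also shows why latency matters so much: if a slow legacy interface pushes average service time from 200 ms to 2 s at the same request rate, the number of sessions you must hold open (threads, memory, connections) grows tenfold.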

The Advantages of Prototyping your High Load Java App Server

Before beginning full scale implementation of any project, it is worth doing some back-of-the-envelope calculations. This forces you to get specific about your assumptions and expectations. Anything that is not part of a user requirements document or other specification should be noted and circulated to those who understand what was specified. As you consider the various parts of the system, you can make educated guesses about where your bandwidth and processing bottlenecks will be, check your assumptions, and develop a list of questions to be answered. The next step is to prototype those parts of the system to simulate their expected operation in a fully functioning system. For instance, you can time a pre-populated list of database calls, flood your front end processor with requests until you reach its breaking point, and test worst case application calculations against throughput and memory budgets. Don't forget to test actual requests to external components such as data servers, legacy systems, and ancillary hardware; these often perform very differently in practice than their specifications suggest.
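A minimal timing harness is often all the tooling such a prototype needs. The sketch below is a generic micro-harness you could point at a replayed list of database calls or a worst-case calculation; for serious measurement you would account for JIT warm-up and variance (e.g. with JMH) rather than trusting a raw average.

```java
// Generic micro-harness for timing a repeated operation during prototyping,
// e.g. replaying a pre-populated list of database calls or running a
// worst-case application calculation.
public class PrototypeTimer {
    public static double averageMillis(Runnable op, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            op.run();
        }
        // Average wall-clock time per iteration, in milliseconds.
        return (System.nanoTime() - start) / 1e6 / iterations;
    }
}
```

Running the same harness against the real external systems (not mocks) is what surfaces the gap between specified and actual performance.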

By taking these prototyping steps before you commit to an architecture, and especially before you begin coding, you will either verify your assumptions or uncover pitfalls before they need workarounds. Too many developers design and implement a complete system before all their unknowns have surfaced. Attempting to bolt on workarounds and optimizations after an uninformed implementation blows budgets of both time and money, and rarely results in a quality product that is easy to maintain and delivers on expectations.

There’s a lot to think about!

It rarely makes sense to approach a green field server design with specific architectures and technologies already in mind. It's not a bad idea to start with a straw man and see how it holds up under scrutiny as the implementation's characteristics become known. In the real world, though, this is often a luxury. Existing server technology, IT's desire to limit support to a particular stable of hardware and software, client requests, and the talents of the development team are often major factors in picking a development direction. If you find yourself pulled by all these forces, be careful of the path you plot. You may very well want to charge ahead with a high load Java app server, and it may very well be the right choice, but don't commit until you've done your due diligence. Choices at this stage have far reaching consequences, and decisiveness and expediency now may lead to lengthy redesign cycles and heavy maintenance burdens down the road.

There are also those who claim the era of the high end Java app server is over. Don't fret: for the foreseeable future this architecture will be supported, stable, and viable. Abandoning server architectures in favor of stand-alone applications may be a wave of the future, but pioneers may face a whole host of unexpected challenges and maintenance issues. The often elusive promise of greater simplicity and elegance may eventually win out, but for the time being the tried and true presents a proven path. Good luck!


(courtesy Mission Impossible 4)

About the author

Anna Melkova
  • Maxim Drozdov

    One more thing to consider about environments: testing should be done on an environment that is similar to the intended production environment.