Project

General

Profile

Threading Policy

Java Threading

What is a thread?

A thread (sometimes called a "lightweight process") is an additional line of execution in a given program or process. These additional lines of execution give the program an appearance of being able to perform more than one task at a time. If the program is executing on a computer that contains more than one processor, then the program could actually execute the threads simultaneously.

The reason threads are also called "lightweight processes" is that each thread executes in the same address space as the parent process. Context switches between threads are cheaper than between processes because address spaces are not being swapped. Additionally, because threads occupy the same address space, each thread has access to the same memory locations and variables within the parent process. If individual processes were used to implement a threaded model for a program, significant execution time would be lost to address-space context switches and interprocess communication.

However, threads do have their own set of issues that must be addressed in any multithreaded process. The most common issue encountered is the "race condition." As stated above, each thread within a given process has access to the same memory space and objects. A race condition occurs when multiple threads attempt to access a shared object or resource and the program's results or behavior depend on which thread uses the resource first. This condition can lead to corrupt data and is typically hard to track down. In addition to race conditions, multithreaded processes can be subject to deadlocks, synchronization errors, synchronization overhead, and contention.

What is the goal of this document?

This document has two main purposes. First, it discusses some of the more common threading errors in multithreaded Java programming. Second, it develops the GIFT Threading Policy. This policy is designed to help ensure the stability of the GIFT application as components are developed across the various task orders.

Common Java Threading Issues

Race Condition

A race condition is the classic threading problem. A race condition exists when two or more threads compete for the same memory location, variable, or resource. Should the current thread be preempted during modification of this shared resource by another thread, the resource can be left in an unknown or invalid state. This can result in unpredictable program behavior, variable or memory data corruption, or execution deadlock.

Synchronize & Locking

To help resolve this resource contention issue, a method was developed to control concurrent use of any shared resource, commonly known as resource locking. Java handles locking through the synchronized keyword. In Java, all objects have a lock. A thread can acquire a lock on an object through use of the synchronized keyword. Once that lock has been acquired by a thread, no other thread in the process can enter a synchronized block of code for the same object.

Synchronize example:

public class BankAccount {

  double balance = 0.0;
  float interest = 1.23f;

  ...

  public synchronized void computeInterest() {

    ...

    // Do interest calculations

    ...
  }

  public synchronized double withdrawal(double amount) {

    // Verify available funds

    ...

    balance -= amount;

    ...

    return balance;
  }
}

Figure 1
In the above example, the synchronized keyword will prevent the withdrawal() and computeInterest() methods from executing at the same time on the same BankAccount object. The idea is to prevent another thread from modifying the balance while the interest is being calculated. For additional information on locking, see the Locking / Synchronizing section.

Memory Corruption

Memory corruption is a common result of uncontrolled or unregulated access to shared memory objects by multiple threads. Race conditions usually result in memory corruption. The synchronization example in figure 1 is an example of locking to prevent memory corruption. Without the synchronization locks, it would be possible for a withdrawal to occur while computing the interest. This could lead to the computeInterest function resetting the balance to an "old" pre-withdrawal value.

Deadlocks

Deadlocks are another result of synchronization errors in multithreaded programming. Deadlocks are typically easy to spot when they occur. When threads deadlock, the result will be the appearance of a frozen application or a partially frozen application. Let's assume that there are some code segments in a class that must obtain two locks to execute (lock A & lock B). While executing, thread #1 acquires lock A and at the same time thread #2 acquires lock B. If both threads then enter a synchronized block requiring both locks, the threads will deadlock. A deadlock will occur any time a thread is waiting for a lock that it will never receive.
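The lock-ordering scenario above can be sketched in code. The class and lock names below are illustrative, not GIFT code: the two methods acquire the same pair of locks in opposite orders, so if two threads call them concurrently, each can end up holding one lock while waiting forever for the other.

```java
// Illustrative sketch of the lock-ordering deadlock hazard: thread #1
// takes lock A then waits for lock B, while thread #2 takes lock B
// then waits for lock A.
public class DeadlockHazard {

    private final Object lockA = new Object();
    private final Object lockB = new Object();
    private int entries = 0;

    // Thread #1 path: acquires lock A, then lock B.
    public void firstThenSecond() {
        synchronized (lockA) {
            // If another thread is inside secondThenFirst() right now,
            // it holds lockB and is waiting for lockA: deadlock.
            synchronized (lockB) {
                entries++;   // critical section requiring both locks
            }
        }
    }

    // Thread #2 path: acquires the same locks in the OPPOSITE order.
    public void secondThenFirst() {
        synchronized (lockB) {
            synchronized (lockA) {
                entries++;   // critical section requiring both locks
            }
        }
    }

    public int getEntries() { return entries; }
}
```

Called from a single thread the methods are harmless; the hazard appears only under a concurrent interleaving.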

The GIFT Locking and Synchronizing policy is discussed below along with tips to help prevent deadlocked threads. See Locking / Synchronizing for more details.

Thread Starvation

There are two thread scheduling models. They are preemptive and cooperative. Under preemptive scheduling, the underlying scheduler will provide a given thread with a predefined amount of CPU time. If the thread does not yield the CPU within that predefined time, the scheduler will interrupt the current thread and the next waiting thread will be given CPU time. Under the cooperative model, a thread will retain the CPU until it specifically releases it. Thread starvation is a common problem for cooperative scheduling. Thread starvation occurs when the current running thread fails to release the CPU or holds onto the CPU for long periods of time. When this occurs, the other threads will not receive CPU time and they will appear to "lockup."

It is good coding practice to code as if you were in a cooperative model. CPU-intensive code segments should be as short as possible and should be performed in worker / daemon threads that enter a wait state until the next item or items to be processed arrive. Threads that enter a wait or sleep will automatically release the CPU to another thread.
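The worker-thread pattern just described can be sketched with a hypothetical work queue (the class and method names are illustrative): the worker blocks in wait(), releasing the CPU, until an item arrives.

```java
import java.util.LinkedList;
import java.util.Queue;

// Illustrative sketch of a worker-side queue: the worker releases the
// CPU by wait()ing until work arrives, rather than spinning.
public class WorkQueue {

    private final Queue<Runnable> tasks = new LinkedList<Runnable>();

    // Producer side: queue an item and wake a waiting worker.
    public synchronized void submit(Runnable task) {
        tasks.add(task);
        notifyAll();
    }

    // Worker side: block (releasing the CPU and this lock) until an
    // item is available, then return it.
    public synchronized Runnable take() throws InterruptedException {
        while (tasks.isEmpty()) {
            wait();   // releases the lock; resumes on notifyAll()
        }
        return tasks.remove();
    }
}
```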

Dead Processes

In Java, a process will not exit until all user threads have exited (with the exception noted below). This behavior can lead to what seems to be a dead process under certain conditions. For example, assume that a Java application starts a worker thread for processing specialized requests. If an exception occurs that causes the main thread to exit, and the worker thread is not aware that the main thread has exited, the application will appear to be dead and will not exit. In fact, the worker thread will wait forever to process requests that will never arrive. Care should be taken in event handlers to inform all dependent threads of error conditions. Additionally, worker threads should be marked as daemon threads. Daemon threads do not prevent the Java VM process from exiting: the VM exits when all non-daemon threads have exited, and when the last non-daemon thread exits, all daemon threads are shut down. However, daemon threads can be abruptly halted, so care should be exercised in their use. See General Threading Issues / Error Handling for more details.

Deprecated Thread Functions

With the release of JDK 1.0, the Java Language Specification included a set of functions on the Thread class: Thread.stop(), Thread.suspend(), and Thread.resume(). These functions were deprecated by Sun because they were considered inherently dangerous; they could easily lead to corrupted memory and deadlock situations. For a full description of these functions and acceptable workarounds, see the following document: Java Thread Primitive Deprecation
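The commonly recommended replacement for Thread.stop() can be sketched as a cooperative shutdown flag (the class name below is illustrative): the worker polls a volatile flag and exits its loop on its own, leaving data in a consistent state.

```java
// Illustrative sketch of the Thread.stop() workaround: the worker
// checks a volatile flag and shuts down cooperatively.
public class StoppableWorker implements Runnable {

    private volatile boolean running = true;

    // Called by another thread to request an orderly shutdown.
    public void requestStop() {
        running = false;   // visible to the worker without locking
    }

    public void run() {
        while (running) {
            // do one unit of work, then re-check the flag
        }
        // perform orderly cleanup here before the thread exits
    }
}
```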

GIFT Threading Policy

The following sections will address the common threading issues described in the previous sections. This section is grouped into functional areas that compose the GIFT Threading Policy. The policy itself is denoted in the red bullet items at the start of each section. Following the policy, each section describes in greater detail the goals of the stated policy.

The Threading Policy uses the terms "should" and "shall." Policy items worded as "should" are policies that GIFT developers are highly encouraged to follow; these policies are recognized as good development practices. If circumstances require violating a "should" policy item, the reasons for the violation must be documented in the form of inline comments and/or rationale documents. Policy items marked as "shall" will not be violated by developers without obtaining prior approval.

General Threading Issues

Exception handling should include signaling of dependent threads of failure in the parent thread.
Deprecated threading functions: stop(), suspend(), and resume() shall not be used.
The use of daemon threads should be limited.

Exception Handling

Within the Java Language Specification, a process or application will continue to execute until the last user thread has exited. In Java, the main thread is simply another user thread. Therefore, it is legal for the main thread to exit prior to spawned user threads. However, this can lead to a commonly overlooked aspect of working with threads. Special attention should be given to exception handling in multithreaded processes. For example, if the main thread should throw an exception and halt without informing the child threads of the exit condition, the remaining threads would continue to execute or sleep waiting for instructions. This would lead to what appears to be a dead Java process.

GIFT should properly trap and handle exceptions. This includes informing user threads of the error condition. This will allow the spawned user threads to safely cleanup and halt execution if needed.
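One way to implement this signaling can be sketched as follows (the class and method names are illustrative, not GIFT code): the parent traps the exception and interrupt()s a worker blocked in wait(); the worker treats the resulting InterruptedException as its cue to clean up and halt.

```java
// Illustrative sketch: parent thread traps an exception and informs a
// dependent worker thread via interrupt().
public class FailureSignalingDemo {

    // Worker that blocks waiting for requests; an interrupt from the
    // parent is its signal to clean up and exit.
    static Thread startWorker(final Object requestLock) {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                synchronized (requestLock) {
                    try {
                        requestLock.wait();   // wait for a request
                    } catch (InterruptedException e) {
                        // parent reported an error: clean up and halt
                    }
                }
            }
        });
        worker.start();
        return worker;
    }

    // Parent-side pattern: trap the exception, inform the worker.
    static void runParent(Thread worker) {
        try {
            throw new RuntimeException("simulated failure in parent");
        } catch (RuntimeException e) {
            worker.interrupt();   // signal the dependent thread
        }
    }
}
```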

Deprecated Functions

GIFT shall not use the deprecated Java Threading functions. For more information on these functions, see the Deprecated Thread Functions section.

Daemon Threads

The Java Language Specification supports two types of threads, user threads and daemon threads. Typically each thread spawned in Java is a user thread. Daemon threads can be created by setting a thread object as a daemon thread prior to thread execution. Therefore, setDaemon() must be called prior to start(). As stated in the Dead Process section above, a Java process will continue to execute until the last user thread has exited. A daemon thread will not prevent a Java process from exiting. For example, consider the following sequence:

Execution Sequence                       User Threads       Daemon Threads
Java process starts                      1 thread (main)    0 threads
Process creates additional user thread   2 threads          0 threads
Process creates two daemon threads       2 threads          2 threads
User thread exits                        1 thread           2 threads
User thread exits                        0 threads          2 threads
Java process exits                       0 threads          both daemon threads killed

As you can see from the sequence of events above, when the last user thread exits, the Java process will exit. When the Java process exits, any running daemon threads are killed. It is important to note that the daemon threads are not notified by the Java VM that execution will be halted. Therefore, when the daemon threads exit, they can leave processing in an unknown state.

Based on the fact that the daemon threads can be abruptly halted, daemon threads should be used only for tasks that can handle being halted in an unknown state. These may include quick repetitive user directed tasks, blocked I/O listeners, or listening on a DIS port for incoming simulation events. It is important to remember that daemon threads can abruptly terminate. Caution should be exercised in the use of daemon threads.
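A minimal sketch of creating a daemon thread (the helper name is illustrative): note that setDaemon(true) must come before start(), since calling it on a thread that has already started throws IllegalThreadStateException.

```java
// Illustrative sketch of the setDaemon()-before-start() ordering.
public class DaemonStarter {
    public static Thread startListener(Runnable task) {
        Thread t = new Thread(task);
        t.setDaemon(true);   // must precede start(); calling it after
                             // start() throws IllegalThreadStateException
        t.start();
        return t;
    }
}
```

Because the thread is a daemon, it will not keep the VM alive and may be halted abruptly at process exit.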

Locking / Synchronizing

Group locks or a locking sequence should be used to prevent deadlocking on individual locks.
Calls of wait() should be restricted to method locks.
(Note: A thread that has called wait() will not resume until notify() or notifyAll() is called, or until it times out if a timeout interval was specified.)

Java Locking

Concurrent threaded access to shared resources in Java is handled with the synchronized keyword / statement. The synchronized statement is used to obtain a lock. Once a thread has obtained a lock, only that thread can use the resource associated with that lock.

Java was developed with threading supported at the language level. As a result, every object in a Java process has an associated lock. For example, when a new Integer object is created with:

          Integer anInteger = new Integer(10);

the anInteger object will have a lock associated with it. If a thread wishes to obtain the lock from anInteger, it can use the following block of code:

          synchronized(anInteger) {
            // Thread safe code.  Only one thread
            // can enter this code segment or any
            // other synchronized(anInteger) code
            // block.
          }

The first thread to enter the synchronized block above will obtain the lock associated with the anInteger object. Any other thread that attempts to enter the same code block, or any other synchronized(anInteger) block, will enter a wait state until the thread that holds the lock exits its synchronized block. The above lock is considered a fine-grained lock: it represents a lock on a specific object.

Another lock type is a method lock. A method lock occurs when a method declaration contains the synchronized keyword. For example:

          public class Something {
            public synchronized void doSomething() {
              // method code
            }
          }

In the above example, when a thread enters the method, it obtains the lock associated with the Something object. No other thread can enter the same method, or any other synchronized method on the same object, until the lock is released. For example:

public class UseSomething extends Thread {

  Something s = new Something();

  public void run() {
    s.doSomething();
  }

  public static void main(String[] args) {
    UseSomething thread1 = new UseSomething();
    UseSomething thread2 = new UseSomething();

    thread1.start();
    thread2.start();
  }
}

Figure 2

public class UseSomething extends Thread {

  static Something s = new Something();

  public void run() {
    s.doSomething();
  }

  public static void main(String[] args) {
    UseSomething thread1 = new UseSomething();
    UseSomething thread2 = new UseSomething();

    thread1.start();
    thread2.start();
  }
}

Figure 3
In Figure 2, both threads will execute and finish independently. In Figure 3, however, one of the two threads may enter a wait state for the time it takes the other thread to finish executing the doSomething() method. In Figure 3, because s is declared static, the method lock on doSomething() acts on the same object. In Figure 2, s is not static, so two Something objects exist and the synchronized methods act on two different locks.

Lock Sequence / Group Locks

As stated in the Deadlocks section above, a common problem for multithreaded processes is deadlocking. This can occur any time a thread is waiting for a lock that will never be freed. One situation that causes a deadlock is a code segment that requires more than one lock to enter, where each of two threads holds one of the required locks. Each thread will wait indefinitely for the other to release its lock.

Two techniques can be employed to help resolve this issue. First, set up all synchronized blocks of code to obtain locks in the same predetermined order; this prevents one thread from holding a lock another thread needs while waiting for a lock that thread holds. The locking order should be determined by the component developers and, once determined, published so that any developer working on the same component implements it consistently. Second, group locks can be employed for sections of code that require more than one lock. Under a group lock approach, a thread can obtain the individual locks only if it first obtains the group lock.
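The first technique, a published lock order, can be sketched as follows (names are illustrative): both tasks acquire lockA before lockB, so the opposite-order deadlock described earlier cannot occur.

```java
// Illustrative sketch of a fixed lock-acquisition order: every code
// path takes lockA before lockB.
public class OrderedLocks {

    private final Object lockA = new Object();
    private final Object lockB = new Object();
    private int work = 0;

    public void taskOne() {
        synchronized (lockA) {          // always A first...
            synchronized (lockB) {      // ...then B
                work++;
            }
        }
    }

    public void taskTwo() {
        synchronized (lockA) {          // same published order, even
            synchronized (lockB) {      // though this task touches B's
                work--;                 // data first
            }
        }
    }

    public int getWork() { return work; }
}
```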

Restrict wait() Calls

GIFT should restrict the use of wait() calls to synchronized methods and avoid calling wait() within fine-grained synchronized blocks of code. Since a synchronized method holds only one lock (the object the method is operating on), calling wait() releases that one lock. Under fine-grained locks, however, wait() will not release any other locks the thread may hold when it is called. This can leave a waiting thread holding a lock and cause another thread to deadlock.
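The difference can be sketched with an illustrative class: in risky(), wait() releases only innerLock while the thread continues to hold outerLock; in preferred(), the synchronized method holds exactly one lock, which wait() releases while the thread sleeps.

```java
// Illustrative sketch of why wait() belongs in method locks.
public class NestedWaitHazard {

    private final Object outerLock = new Object();
    private final Object innerLock = new Object();

    public void risky() throws InterruptedException {
        synchronized (outerLock) {
            synchronized (innerLock) {
                innerLock.wait(10);  // releases innerLock only;
                                     // outerLock remains held!
            }
        }
    }

    // Preferred form: a synchronized METHOD holds exactly one lock
    // (this), and wait() releases it until notify or timeout.
    public synchronized void preferred() throws InterruptedException {
        wait(10);
    }
}
```

The short timeouts above are only so the sketch returns on its own; in real code the thread would wait to be notified.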

Thread Optimization

Reduce Contention

  • Synchronized blocks should be made as small as possible.
  • For repetitive quick entries into a single synchronized block (e.g., a small loop accessing a thread-safe collection), the entire loop should be synchronized.
  • Java 1.6 concurrent collections should be used to minimize the need for synchronized code blocks.
  • Lock granularity should be increased to lock unrelated data elements separately.
  • Synchronized code blocks that make blocking IO calls should be avoided.
  • Calls to yield() in a synchronized block should be avoided.

Synchronizing a block of code increases the overhead of executing that block. Simply stated, unsynchronized blocks of code do not incur this processing overhead; therefore, inherently thread-safe operations should not be placed in synchronized code blocks. However, if multiple threads are going to access shared resources, synchronized blocks of code will likely be necessary. Additionally, a synchronized call can be a contended call. A contended synchronized call occurs when a thread attempts to enter a synchronized block of code whose lock has already been obtained by another thread. Under this condition, the thread requesting the lock must wait until that lock has been freed, and this wait time rapidly increases the overhead of the call.

  1. Unsynchronized call (no overhead)
  2. Uncontended synchronized call (moderate overhead)
  3. Contended synchronized call (large overhead)

Small Synchronized Blocks

As a result, synchronized blocks of code should be optimized to reduce the chances of a contended synchronized call occurring. One of the quickest ways to reduce contended calls is to optimize the code within the synchronized blocks to execute as quickly as possible. The less time a thread retains a lock, the less likely another thread will encounter a contended call. One effective way to optimize a synchronized block is to verify that inherently thread safe code is not within the synchronized block.

However, care should be taken not to over optimize synchronized blocks. For example, consider the following blocks of code:

public class MyNumbers {

  Vector<Integer> theVector = new Vector<Integer>();

  public void add(Integer value) {
    synchronized(theVector) {
      theVector.add(value);
    }
  }

  public Integer getLargest() {
    int size;
    Integer theInt;

    synchronized(theVector) {
      size = theVector.size();
      theInt = theVector.get(0);
    }

    for(int i = 1; i < size; i++) {
      synchronized(theVector) {
        if(theVector.get(i).intValue() > theInt.intValue()) {
          theInt = theVector.get(i);
        }
      }
    }

    return theInt;
  }
}

Figure 4

public class MyNumbers {

  Vector<Integer> theVector = new Vector<Integer>();

  public synchronized void add(Integer value) {
    theVector.add(value);
  }

  public synchronized Integer getLargest() {
    int size = theVector.size();
    Integer theInt = theVector.get(0);

    for(int i = 1; i < size; i++) {
      if(theVector.get(i).intValue() > theInt.intValue()) {
        theInt = theVector.get(i);
      }
    }

    return theInt;
  }
}

Figure 5

Synchronized Loops

In an attempt to limit the time actually spent within the synchronized block, someone might be tempted to develop the code segment found in Figure 4. There are a few problems with this code. First, in attempting to ensure that only non-thread-safe code is within the synchronized block, the for loop occurs outside the synchronized block. If the vector is extremely large, significant overhead is created by repeatedly entering the synchronized block inside a small loop. For small looping structures, consider placing the entire loop within the synchronized block.

Additionally, there is a subtle logic error within the code in Figure 4. Since the loop itself is not within the synchronized block, it is possible for another thread to modify the length of the vector when the current thread is in the increment and test phase of the loop. This could lead to the race condition as discussed above.

Use Java 1.6 Concurrent Collections

Many times synchronized blocks of code are used to control concurrent access to Java collections. Starting with Java 1.5 (and expanded in Java 1.6), Sun began introducing concurrent collection classes. These collections have been optimized to provide thread safe concurrent access to the collection. This eliminates the need for synchronized blocks surrounding access to the collections. These collections can be found in the java.util and java.util.concurrent packages. For example, consider the following table for selecting a concurrent version of a collection.

Java Collection   Concurrent Collection    When to use
Map               ConcurrentHashMap        Reads typically do not block and reflect the most recently completed update operations.
List              CopyOnWriteArrayList     Thread-safe list in which read / traversal operations outnumber mutations. Provides a "snapshot" style iterator.
Set               CopyOnWriteArraySet      Thread-safe set implementation similar to CopyOnWriteArrayList. Best for sets whose read / traversal operations outnumber mutations.
Set (sorted)      ConcurrentSkipListSet    Thread-safe set implementation in which elements are kept sorted according to their natural ordering or by a provided Comparator.
Map (sorted)      ConcurrentSkipListMap    Thread-safe map implementation in which keys are kept sorted according to their natural ordering or by a provided Comparator.

The above table represents a limited set of concurrency tools provided by Java 1.6. For a complete list of concurrency tools (including collections), see The Collections Framework and Concurrency Utilities.
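A brief sketch of using two of the collections from the table (the class and field names are illustrative): no synchronized blocks are needed around these calls.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative sketch of replacing externally synchronized collections
// with the concurrent versions listed above.
public class ConcurrentCollectionsDemo {

    // Safe for concurrent put/get without external locking.
    static final Map<String, Integer> scores =
            new ConcurrentHashMap<String, Integer>();

    // Best when reads / traversals far outnumber writes; iterators see
    // a snapshot and never throw ConcurrentModificationException.
    static final List<String> listeners =
            new CopyOnWriteArrayList<String>();
}
```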

Increase Lock Granularity

Another common method to reduce synchronized contention is to increase the granularity of the locks. This is commonly done with objects that contain sets of non-interdependent data elements that can be locked separately. For example, consider the following class responsible for keeping a university enrollment roster:

public class EnrollmentRoster {

  ArrayList<Student> undergrad = new ArrayList<Student>();
  ArrayList<Student> graduate = new ArrayList<Student>();

  public synchronized void addUndergrad(Student student) {

    undergrad.add(student);

  }

  public synchronized void addGraduate(Student student) {

    graduate.add(student);

  }
}

Figure 6

public class EnrollmentRoster {

  ArrayList<Student> undergrad = new ArrayList<Student>();
  ArrayList<Student> graduate = new ArrayList<Student>();

  public void addUndergrad(Student student) {
    synchronized(undergrad) {
      undergrad.add(student);
    }
  }

  public void addGraduate(Student student) {
    synchronized(graduate) {
      graduate.add(student);
    }
  }
}

Figure 7

In Figure 6, two separate threads could not simultaneously add both an undergraduate and a graduate student to the roster at the same time. However, by increasing the lock granularity in Figure 7, both undergraduate and graduate rosters can be updated simultaneously. This change can potentially reduce the number of contended synchronized block calls. (Note: This example is for illustration purposes only. An even better solution is to use the concurrent collections described in the previous section.)

Synchronized Blocking IO

Another factor that can increase the likelihood of a contended synchronized call, or even a deadlock, is the use of blocking IO calls within a synchronized block. When a thread encounters a blocking IO call, it waits on that call until data is returned. If the IO call is contained within a synchronized block of code, this wait time increases the chances that another thread will request the lock while the current thread is still waiting on the IO call. Therefore, GIFT should avoid making blocking IO calls while holding synchronization locks not directly related to the IO operation itself.

Calling yield() in Synchronized Block

As with the other thread optimization techniques discussed in this section, the goal here is to decrease the likelihood of thread contention. When a call is made to yield(), the current running thread releases the CPU, allowing other threads to execute. If the thread releases the CPU while in a synchronized block, the lock is still held by the yielding thread. Again, this increases the duration the lock is held and the likelihood of a contended call. Therefore, GIFT should avoid calling yield() while in a synchronized block of code.

Thread Priorities

  • The use of Thread.setPriority() should be limited to relative values.
  • The use of yield() to achieve threading priorities shall be avoided.

The Java threading model provides the ability to specify and dynamically update Java thread execution priorities. However, the implementation and support of thread priorities (through the java.lang.Thread.setPriority() method call) can vary depending on the JVM implementation and the host OS. The Java Language Specification does not require the implementation of thread priorities; for example, the Sun VM on Linux did not support setting thread priorities until Java 1.6. GIFT should limit the use of the setPriority() call. Also, due to inconsistent implementation of priority levels, GIFT should avoid absolute priority levels. The relative priority levels Thread.MIN_PRIORITY, Thread.NORM_PRIORITY, and Thread.MAX_PRIORITY should be used instead.

GIFT should avoid the use of yield() to simulate threading priority levels. While the yield() call can be used to release any remaining time the current thread has left, effectively lowering its priority, this method is not guaranteed to work. The next thread scheduled to run may be the thread that has just yielded.

Compiler Optimization

  • Code segments that might be optimized out due to compiler determination that the code segment is unreachable should be avoided.

Certain code optimization techniques employed by compilers will attempt to determine if a block of code will ever be executed. If the compiler decides the block of code will not execute, it may not compile that code block. This presents an interesting problem to multithreaded processes. Consider the following example:

public class VolatileTimes {

  boolean executeFlag;

  public void doIt() {
    executeFlag = false;

    // Perform some other actions.  However, 
    //  executeFlag is never referenced!
    ...

    if(executeFlag) {

      // Valid code that could execute.
      ...
    }
  }
}

Figure 8
An optimizing compiler might determine that the value of executeFlag is never updated prior to the if statement and, as a result, might not compile the entire code block inside the if statement. However, the code within the if block could legitimately execute if another thread changes the state of executeFlag between the time it is set to false at the start of doIt() and the if statement that tests its value.

GIFT should avoid code of this nature. However, the volatile keyword can be used to prevent the compiler from predicting the value of executeFlag in the above example; the code block would then not be optimized out.
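A sketch of the Figure 8 class with the one-word fix applied (renamed here for clarity): with the flag declared volatile, the compiler may no longer assume executeFlag is unchanged between the assignment and the test, so the if block cannot be optimized away.

```java
// Figure 8 revisited: executeFlag is now volatile, so every read sees
// the latest write from any thread and the if-block must be compiled.
public class VolatileTimesFixed {

    volatile boolean executeFlag;

    public void doIt() {
        executeFlag = false;

        // Another thread may legitimately set executeFlag here, and
        // this thread is guaranteed to see the update.

        if (executeFlag) {
            // valid code that could execute
        }
    }
}
```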

Thread Groups

  • The use of thread groups is optional.

In Java, all threads belong to a thread group. When a thread is created, it is automatically added to the same thread group the parent thread belongs to. Thread groups can be used to organize related threads. Java will create a default thread group that the main thread belongs to. It is the responsibility of the Java application to create additional thread groups if they are desired. Threads can then be created in the new thread group(s).

However, thread groups were originally created to allow the stop(), suspend(), resume(), and setMaxPriority() functions to be applied to all threads within a specified thread group. Since stop(), suspend(), and resume() have been deprecated, much of the thread group structure's functionality has been removed. Therefore, GIFT components are not required to use thread groups. However, a component that creates a significant number of threads should consider using thread groups to aid in organizing and debugging those threads.

Various Threading Issues

  • Threading and Swing
    • GIFT Swing Threads and UI
    • Sun Swing / Threads Reference
    • invokeLater and invokeAndWait
  • Threading and Composition (Components)
    • System components that need a thread of execution following load should implement the RunnableComponent interface.
  • Threading, Repeatability, and Threading Library
  • When to use threads.

Threading and Swing

For a robust and responsive UI experience, threading should be used within the UI components. However, Swing components are not thread safe by default, so care should be taken when using them in a threaded environment. As a general rule, once a Swing component has been realized, it should not be updated directly from any thread other than the event-dispatch thread.

The SwingUtilities.invokeLater() and .invokeAndWait() methods can be used to safely interact with Swing components from another thread. These calls place a runnable object on the event-dispatch queue; the runnable executes when it reaches the top of the queue. invokeLater() queues the runnable object and returns immediately. invokeAndWait() queues the runnable object and then blocks until that runnable has finished executing.

Another option is the SwingWorker. A GUI application with time-intensive computing needs requires at least two threads: one for performing the task and another for GUI-related activities. The inter-thread communication this involves can be tricky to implement. SwingWorker is designed for situations where a long-running task must run in a background thread and provide updates to the UI either when done or while processing.

The GIFT UIs and Threads document describes the policy of thread usage within the UI. Additional Swing/threading references can be found at:

Threads and Swing
How to use threads
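A minimal sketch of the invokeLater() pattern (the helper class and the commented-out component update are illustrative, not GIFT code): a background thread queues a Runnable that the event-dispatch thread will execute.

```java
import javax.swing.SwingUtilities;

// Illustrative sketch: push UI work onto the event-dispatch thread
// from any background thread.
public class EdtUpdate {

    // Queue a UI update without blocking the calling thread.
    public static void updateLabelLater(final String text) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                // Safe to touch realized Swing components here, e.g.:
                // statusLabel.setText(text);   // hypothetical component
            }
        });
    }
}
```

invokeAndWait() follows the same shape but blocks the caller until the runnable has completed; it must never be called from the event-dispatch thread itself.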

When to use threads.

The following is a general outline as to when code segments should be implemented in threads:

  • Time consuming processes (large computational code blocks).
    • Remove from main thread for faster initialization.
    • Remove from event-dispatch thread for GUI responsiveness.
  • Any blocking I/O calls.
  • Periodic processing of code blocks at a given time interval.

Policy Checklist

  1. Avoid using synchronization whenever you can.
  2. Design asynchronous algorithms and approaches whenever possible to minimize overhead associated with synchronous approaches.
  3. Use Java 1.6 concurrency utilities, including concurrent collections, to minimize the need for synchronization blocks.
  4. Synchronizing on methods rather than on blocks of code is faster; synchronize on methods when the resulting code is identical.
  5. Synchronize on blocks when it is a small portion of the requirement.
  6. Minimize synchronization to the smallest most efficient scope.
  7. Don't use synchronization in read-only or single-threaded queries.
  8. Synchronized wrappers (obtained from Collections.synchronizedList(List)) add a level of indirection that can have a high performance cost.
  9. Do not use deprecated threading functions: stop, suspend, and resume.
  10. Avoid making blocking IO calls within synchronized blocks.
  11. Avoid calls to yield() in synchronized block.
  12. Limit the use of setPriority() to relative values.
  13. Too little synchronization leads to corrupt data; too much leads to reduced performance.

Policy Summary

General Threading Issues

  • Exception handling should include signaling of dependent threads of failure in the parent thread.
  • Deprecated threading functions: stop(), suspend(), and resume() shall not be used.
  • The use of daemon threads should be limited.

Locking / Synchronizing

  • Group locks or a locking sequence should be used to prevent deadlocking on individual locks.
  • Calls of wait() should be restricted to method locks.
    (Note: A thread that has called wait() will not resume until notify() or notifyAll() is called, or until it times out if a timeout interval was specified.)

Thread Optimization

  • Reduce Contention
    • Synchronized blocks should be made as small as possible.
    • For repetitive quick entries into a single synchronized block (e.g., a small loop accessing a thread-safe collection), the entire loop should be synchronized.
    • Java 1.6 concurrent collections should be used to minimize the need for synchronized code blocks.
    • Lock granularity should be increased to lock unrelated data elements separately.
    • Synchronized code blocks that make blocking IO calls should be avoided.
    • Calls to yield() in a synchronized block should be avoided.

Thread Priorities

  • The use of Thread.setPriority() should be limited to relative values.
  • The use of yield() to achieve threading priorities shall be avoided.

Compiler Optimization

Code segments that might be optimized out due to a compiler determination that the code segment is unreachable should be avoided.

Thread Groups

The use of thread groups is optional.

Various Threading Issues

  • Threading and Swing
    • GIFT Swing Threads and UI
    • Sun Swing / Threads Reference
    • invokeLater and invokeAndWait
  • Threading, Repeatability, and Threading Library
  • When to use threads.