
Writing Java Applications on NetWare Using Legacy NLMs


VISHAL GOENKA
Software Engineer
Network Security

KRISHANU SEAL
Software Consultant
Network Security
Thanks to S. Dinakar for his assistance with this article.

01 Jan 1999


Discusses issues developers face when writing Java applications on NetWare using legacy NLMs, with regard to memory management, thread synchronization, and process management.

Introduction

Writing Java applications on NetWare is no different from writing them for any other Java Virtual Machine (JVM) until it comes to using the Java Native Interface (JNI) to access the services of existing (legacy) NLMs. The NetWare architecture presents some unique challenges in interfacing Java applications with NLMs written during the pre-Java era. This document describes some of the nuances in the areas of memory management, thread synchronization, and process management. Much of this work has resulted from our own experience with writing Java applications on NetWare 5, and the issues described here have been validated by writing specific execution cases. This article is targeted at developers writing Java applications that use the services of legacy NLMs.

We assume the following modules for writing Java applications using legacy NLMs:

Figure 1: Modules for writing Java applications.

Java Classes: All application classes written in Java.

JNI NLM: The module containing the function definitions for the Java native methods, written in C/C++. It uses JNI to access Java objects and methods, and uses C/C++ functions exported by various NLMs to access NetWare services.

Legacy NLM: An NLM that exposes its services as functions for other NLMs to use. It assumes no knowledge of Java.

This document addresses three key issues:

  • Multiple concurrent invocations of a Java application that uses a legacy NLM.

  • Interoperability between virtual memory (VM) mapped and non-VM mapped memory management.

  • Shared lock (for Mutex) accessible from Java and Native code.

Multiple Concurrent Invocations of the Same Java Application

While an application written as an NLM cannot have multiple instances executing concurrently (an NLM cannot be loaded twice), it is perfectly fine to have multiple concurrent instances of the same Java application. For each execution instance of the Java application, the JVM loads the Java classes in a different address space. The native code (JNI NLM) used by the Java application, however, is shared among all instances of the application, while a separate data segment is maintained for each instance. This raises some interesting issues when interfacing with legacy NLMs.

When the first instance of a Java application starts, the System.loadLibrary() call in Java causes the native code (JNI NLM) to be loaded into memory. A concurrent invocation of the same Java application doesn't cause the same JNI NLM to be loaded again; instead, the previously loaded NLM's code is shared. The JVM, however, maintains separate data areas for each instance of the loaded JNI NLM. Multiple instances of the same application thus co-exist without knowledge of each other, since all JNI access is routed through the JVM, which takes care of switching to the correct data segment on each access to the NLM.

NLMs further loaded by the JNI NLM via its auto-load list (and not via System.loadLibrary()) are loaded as legacy NLMs. They have a single code segment and a single data segment, which are shared by all concurrent instances of the Java application. Let's consider both types of legacy NLMs: ones that are designed for single-thread access, and those that are designed for multiple-thread access.

Thread Unsafe Legacy NLMs

NLMs that are designed for access by a single thread would behave unpredictably when accessed concurrently by threads from multiple instances of the same Java application. Legacy NLMs that maintain caller state or are otherwise not thread safe (are designed to be used by a single thread) should therefore be loaded via the System.loadLibrary() method in Java, rather than through the auto-load list of the JNI NLM, since each invocation of the same Java application then uses a different data area for the same NLM. Each instance of the Java application, however, still needs to ensure single-thread access to the legacy NLM.

Thread Safe Legacy NLMs

Legacy NLMs that are designed to be used as libraries maintain caller state in a data structure allocated for each calling NLM, based on its unique NLMHandle (obtainable via GetNLMHandle()), and are also thread safe (access to shared data is guarded via semaphores). Different caller NLMs have different values of the NLMHandle, which helps resource tracking for each individual caller. Multiple concurrent instances of the same Java application use the same JNI NLM, and hence the same NLMHandle, and are therefore indistinguishable to the underlying library NLM. Loading the library NLM (legacy NLM) via System.loadLibrary() in Java would create a separate data area for each instance, which would fail in situations where the library NLM's behavior relies on knowledge of the concurrent users (NLMs) of the library. Two approaches to a solution in such cases are presented below.

The first approach is conceptually simple but requires changes to the JVM on NetWare. A separate virtual NLM would be created, for purposes of resource tracking and the like, for each instance of the native code loaded via System.loadLibrary() by different instances of the same Java application. Each virtual NLM would have a different value of NLMHandle and other NLM-specific resource values, thereby making it possible for legacy NLMs to distinguish between multiple instances of the same Java application.

The second approach requires the legacy library NLMs to be modified without requiring a change in the JVM. The NLMHandle is used only for distinguishing different legacy NLMs that use the library. Multiple concurrent instances of the same Java application are distinguished via an application-specific protocol, instead of relying on the NLMHandle of the JNI NLM. A simple protocol is as follows: a Java application instance obtains a one-time run-time identifier during its startup by calling into the library. The library NLM has a function, say GetUniqueHandle(), that generates a unique identifier for each call instance (returns a monotonically increasing number, for example). The Java application identifies itself to the library NLM by passing its unique handle for each method invocation. The NLMHandle of the JNI NLM can, however, be used by the library NLM for those operating system calls that take a valid NLMHandle for resource allocation/tracking.

The Java code sample for the class containing main() is presented below. The code for the library NLM would vary depending upon the mechanism used to distinguish between different callers.

static int uniqueHandle;

private static native int getUniqueHandle();

static {
    System.loadLibrary("JniNlm");
    uniqueHandle = getUniqueHandle();
}

VM Mapped and Non-VM Mapped Memory Management

JNI users are advised to use sysMalloc() and its companion calls sysFree(), sysRealloc(), and sysCalloc() for memory management, instead of the standard malloc(), free(), realloc(), and calloc(), for two reasons. First, the memory returned by sysMalloc() and its companion calls is backed by virtual memory on NetWare 5. Second, these calls provide free tracking of memory resources, allowing the OS to free allocated memory when the JVM is unloaded, even if an explicit call to sysFree() is never made.

Legacy NLMs using the standard malloc() work fine with a matching free(), as expected. There are situations, however, where the called NLM allocates memory on behalf of the caller NLM, and the caller must free it when it is no longer needed. If the caller is a JNI NLM using sysMalloc() and its companion calls, while the called NLM is a legacy NLM using malloc() and its companion calls, this doesn't work: memory allocated with malloc() cannot be freed with sysFree(), or vice versa, the former being non-VM mapped while the latter is VM mapped.

Interoperability issues arise if an NLM calls into a function exported by another NLM, which allocates and returns memory back to the caller, expecting the caller to free it. Such situations can usually be avoided by having the caller allocate memory and pass it to the called NLM function as a parameter, to be filled with meaningful data. However, often the memory requirement cannot be determined beforehand (statically). Runtime calculation of memory requirement, on the other hand, may be quite expensive or even impossible without affecting the state of the operation in progress.

An alternative is to have the called NLM allocate a buffer as required and return the filled buffer to the caller, which is then responsible for freeing it. To ensure consistent usage of malloc() vs. sysMalloc() and their respective companion calls, the caller invokes another function of the previously called NLM (the one which allocated the memory) to free the memory as well. This alternative has the following problems, however.

  • Architecturally, it is not a sound solution to have NLMs share a common memory space. It also disallows internal relocation of allocated buffers within the called NLM, since the caller holds a reference to a memory buffer allocated by the called NLM.

  • The called NLM is charged for the memory allocated by it on behalf of the caller if the caller fails to free the memory.

In the case of a Java application using a legacy NLM, suppose the JNI NLM (caller) calls into the legacy NLM (called) for an operation that returns a chunk of data. The memory required to hold the data needs to be allocated by either the caller or the called NLM. In either case, the memory requirement can only be calculated by the called NLM (the one that generates the data). Once the memory is allocated, it can be deallocated by either the caller or the called NLM. The issues with each combination are summarized below.


Memory allocated by Called NLM (legacy NLM, using malloc()):

  • Deallocated by Called NLM (using free()): If the caller doesn't ask the called NLM to free the memory, the called NLM is charged for the memory leak. This also disallows the called NLM from relocating the memory, since the caller holds a reference to the buffer.

  • Deallocated by Caller NLM (using sysFree()): The memory management calls may not be interoperable (malloc() and sysFree() aren't). The called NLM would be charged for the memory leak if the caller doesn't free it.

Memory allocated by Caller NLM (JNI NLM, using sysMalloc()):

  • Deallocated by Called NLM (using free()): The caller would need to call the called NLM twice, once to find the memory requirement and again with the allocated memory. This may be expensive in most cases, and in some cases it is impossible to find the memory requirement without changing the state of the operation. In addition, the memory management calls may not be interoperable (sysMalloc() and free() aren't).

  • Deallocated by Caller NLM (using sysFree()): The caller would again need to call the called NLM twice, with the same expense or impossibility of finding the memory requirement in advance.

The approach we propose is to have the called NLM allocate the required memory buffer, fill it with meaningful data, and put it in an internal queue. The called NLM returns the buffer size and a handle (which it can map to the allocated buffer) to the caller. The caller allocates its own buffer of the given size and makes another function call into the previously called NLM with its allocated buffer and the handle as parameters. The called NLM uses the handle to locate the buffer in its internal queue, copies the contents into the caller-provided buffer, and then dequeues and frees its internal buffer. This solution has the following advantages:

  • It avoids the overhead of calculating the memory requirements before the actual operations.

  • Each NLM manages its own memory; hence there are no interoperability issues between VM mapped and non-VM mapped memory management calls.

  • Architecturally it allows internal relocation of memory by the NLMs, since NLMs don't share common memory space.

  • The called NLM temporarily allocates memory on behalf of the caller and keeps the buffer in its internal queue till the caller makes another call to retrieve the results. The called NLM could choose to clean up all non-retrieved results, based on a TTL (time to live) or any other policy, for example, so that it is not charged too long for memory allocated on behalf of the caller.

The following fragments of sample code (without error checks) illustrate the implementation:

/* Legacy NLM */

int Enqueue(void *buffer);

void *Dequeue(int handle);



int usefulFunction(<input-argument-list>, int *handle, int *size) {
   void *buffer;    /* Temporary buffer */

   .....

   /* Do useful calculations and fill buffer with generated data */

   /* use malloc() to allocate buffer of required size */

   *size = getSizeOfBuffer(buffer); 

   *handle = Enqueue(buffer);

   return SUCCESS_CODE;

}



void getResultBuffer(void *buffer, int handle, int size) {

   void *tmp_buffer;

   tmp_buffer = Dequeue(handle);

   memcpy(buffer, tmp_buffer, size);

   free(tmp_buffer);

}



....



/* JNI NLM */

JNIEXPORT jbyteArray JNICALL Java_ClassX_MethodY 

        (JNIEnv *jenv, jobject this, <input-argument-list>) {


   int size;

   int handle;

   jbyte * buffer;

   jbyteArray retval;

        ...

   usefulFunction(<input-args>, &handle, &size);

   buffer = (jbyte *)sysMalloc(size);

        ...

   getResultBuffer((void *)buffer, handle, size);

   retval = (*jenv)->NewByteArray(jenv, (jsize)size);

   (*jenv)->SetByteArrayRegion(jenv, retval, (jsize)0, (jsize)size,
                        buffer);

          sysFree(buffer);  // No memory leaks please !!

        return retval;

}

    ...



/* Java Code */

public class ClassX {

    ...

    native byte [] MethodY(<input-argument-list>);
    ...

}

Shared Lock (for Mutex) between Java and Native Code

Method/block synchronization via the synchronized directive is a pure Java solution for mutual exclusion, while semaphores are a pure native (C programming language) solution. It is often necessary to have a shared lock, accessible from both Java and native code, for locking/unlocking operations. An example is a situation where a lock is acquired in Java code and must be released at a later point in native code.

A NetWare semaphore can be used as a mutex from both Java and native code by providing native method equivalents in Java for each of the semaphore APIs in NetWare: OpenLocalSemaphore(), CloseLocalSemaphore(), WaitOnLocalSemaphore(), SignalLocalSemaphore(), etc. The synchronized methods/blocks in Java, however, lock on a Java object, and hence a NetWare semaphore is not interoperable with the synchronized directive in Java.

An interoperable and recommended approach is to use the monitor associated with each Java object for synchronization. Consider the following Java code fragment.

synchronized (this) {

          doSomething();

}

This code is functionally equivalent to:

MonitorEnter(this);

doSomething();

MonitorExit(this);

In synchronized methods (as in synchronized blocks), the monitor is entered when the method starts and exited when it returns. Replacing the MonitorEnter() and MonitorExit() primitives with a synchronized directive provides simplicity of programming at the expense of finer control. The following code fragment, for example, written using MonitorEnter() and MonitorExit(), cannot be written using the synchronized directive without significant code reorganization.

public void method1(int x)   

{

               MonitorEnter(this);

               x++;

               doSomething(x);

}



public void doSomething(int x)

{

               x--;

               MonitorExit(this);

               doSomethingElse();

}

Though MonitorEnter() and MonitorExit() methods are not provided in Java, they can easily be implemented as native methods using JNI, as illustrated below. This allows the monitor to be entered in Java code and exited within native code, or vice versa, thereby providing fine-grained, semaphore-like behavior. MonitorEnter()/MonitorExit() is fully interoperable with the synchronized directive, since both use the same object monitor. Also, there is no overhead of creating a semaphore, since every Java object has a monitor associated with it by default.

/* Sample Java Code for defining MonitorEnter/MonitorExit */

public class X {

    ...

    private native void monitorEnter();

    private native void monitorExit();

    ...

}



/* JNI Implementation */

JNIEXPORT void JNICALL Java_X_monitorEnter(JNIEnv *env, jobject this){

    jint err;

    jclass exception;

    

    err = (*env)->MonitorEnter(env, this);



    if (err)    { // Throw an Exception

        exception = 

            (*env)->FindClass(env, "java/lang/RuntimeException");

        (*env)->ThrowNew(env, exception, "Error in MonitorEnter");

    }

}



/* JNI Implementation */

JNIEXPORT void JNICALL Java_X_monitorExit(JNIEnv *env, jobject this){

    jint err;

    jclass exception;

    

    err = (*env)->MonitorExit(env, this);

    ... /* throw exception as above on error */

}

It must be noted that the thread of execution in the above discussion is assumed to be a Java thread. A native NetWare thread cannot access the JVM, and hence cannot use the monitor associated with a Java object (unless the JVM itself is invoked from the native thread using the JNI Invocation API). Thread synchronization between a Java thread and a native NetWare thread is therefore best achieved using NetWare semaphores, by providing native method equivalents in Java for each of the semaphore APIs in NetWare: OpenLocalSemaphore(), CloseLocalSemaphore(), WaitOnLocalSemaphore(), SignalLocalSemaphore(), etc.

* Originally published in Novell AppNotes


Disclaimer

The origin of this information may be internal or external to Novell. While Novell makes all reasonable efforts to verify this information, Novell does not make explicit or implied claims to its validity.

© Copyright Micro Focus or one of its affiliates