3/31/2007

Tiger Provides an Option for Getting Thread States

Prior to Java 5, isAlive() was commonly used to test a thread's state. If isAlive() returned false, the thread was either new or terminated, but there was simply no way to tell which. Starting with the release of Tiger (Java 5) you can find out exactly what state a thread is in by using the getState() method, which returns a value of the Thread.State enum. A thread can only be in one of the following states at a given point in time.

NEW - A fresh thread that has not yet started to execute.
RUNNABLE - A thread that is executing in the Java virtual machine.
BLOCKED - A thread that is blocked waiting for a monitor lock.
WAITING - A thread that is waiting to be notified by another thread.
TIMED_WAITING - A thread that is waiting to be notified by another thread, for up to a specified amount of time.
TERMINATED - A thread whose run method has ended.


The following code prints out all thread states.

public class ThreadStates {
    public static void main(String[] args) {
        Thread t = new Thread();
        Thread.State e = t.getState();
        Thread.State[] ts = e.values();
        for (int i = 0; i < ts.length; i++) {
            System.out.println(ts[i]);
        }
    }
}
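
For comparison, here is a minimal sketch (not part of the original snippet) showing why getState() is more useful than isAlive(): both before start() and after termination isAlive() returns false, while getState() tells the two situations apart.

public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(new Runnable() {
            public void run() { /* finish immediately */ }
        });

        System.out.println(t.getState()); // NEW
        System.out.println(t.isAlive());  // false

        t.start();
        t.join(); // wait for the thread to finish

        System.out.println(t.getState()); // TERMINATED
        System.out.println(t.isAlive());  // false again - isAlive() cannot tell the difference
    }
}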

3/29/2007

JPC Project

JPC is a pure Java emulation of an x86 PC with fully virtual peripherals. It runs anywhere you have a JVM, whether x86, RISC, mobile phone, set-top box, possibly even your refrigerator! All this, with the bulletproof security and stability of Java technology.

JPC creates a virtual machine upon which you can install your favourite operating system in a safe, flexible and powerful way. It aims to give you complete control over your favorite PC software's execution environment, whatever your real hardware or operating system, and JPC's multilayered security makes it the safest solution for running the most dangerous software in quarantine - ideal for archiving viruses, hosting honeypots, and protecting your machine from malicious or unstable software.

JPC has been developed since August 2005 in Oxford University's Subdepartment of Particle Physics. It can be run on a number of devices, from PCs to mobile phones, and you can see some of the results of JPC in action (more soon!). Some might see JPC as part of a nefarious plot by mad scientists who want to harness every last CPU in the world for their research - but we prefer to see JPC as Java-hardened protection against their buggy programs.

3/28/2007

Faster Deep Copies of Java Objects ( Shallow Copy and Deep Copy )


The java.lang.Object root superclass defines a clone() method that will, assuming the subclass implements the java.lang.Cloneable interface, return a copy of the object. While Java classes are free to override this method to do more complex kinds of cloning, the default behavior of clone() is to return a shallow copy of the object. This means that the values of all of the original object's fields are copied to the fields of the new object.

A property of shallow copies is that fields that refer to other objects will point to the same objects in both the original and the clone. For fields that contain primitive or immutable values (int, String, float, etc…), there is little chance of this causing problems. For mutable objects, however, cloning can lead to unexpected results. Figure 1 shows an example.



import java.util.Vector;

public class Example1 {

    public static void main(String[] args) {
        // Make a Vector
        Vector original = new Vector();

        // Make a StringBuffer and add it to the Vector
        StringBuffer text = new StringBuffer("The quick brown fox");
        original.addElement(text);

        // Clone the vector and print out the contents
        Vector clone = (Vector) original.clone();
        System.out.println("A. After cloning");
        printVectorContents(original, "original");
        printVectorContents(clone, "clone");
        System.out.println("--------------------------------------------------------");
        System.out.println();

        // Add another object (an Integer) to the clone and
        // print out the contents
        clone.addElement(new Integer(5));
        System.out.println("B. After adding an Integer to the clone");
        printVectorContents(original, "original");
        printVectorContents(clone, "clone");
        System.out.println("--------------------------------------------------------");
        System.out.println();

        // Change the StringBuffer contents
        text.append(" jumps over the lazy dog.");
        System.out.println("C. After modifying one of original's elements");
        printVectorContents(original, "original");
        printVectorContents(clone, "clone");
        System.out.println("--------------------------------------------------------");
        System.out.println();
    }

    public static void printVectorContents(Vector v, String name) {
        System.out.println("  Contents of \"" + name + "\":");

        // For each element in the vector, print out the index, the
        // class of the element, and the element itself
        for (int i = 0; i < v.size(); i++) {
            Object element = v.elementAt(i);
            System.out.println("    " + i + " (" +
                element.getClass().getName() + "): " +
                element);
        }
        System.out.println();
    }
}



Figure 1. Modifying Vector contents after cloning

In this example we create a Vector and add a StringBuffer to it. Note that StringBuffer (unlike, for example, String) is mutable: its contents can be changed after creation. Figure 2 shows the output of the example in Figure 1.



> java Example1

A. After cloning
  Contents of "original":
    0 (java.lang.StringBuffer): The quick brown fox

  Contents of "clone":
    0 (java.lang.StringBuffer): The quick brown fox

--------------------------------------------------------

B. After adding an Integer to the clone
  Contents of "original":
    0 (java.lang.StringBuffer): The quick brown fox

  Contents of "clone":
    0 (java.lang.StringBuffer): The quick brown fox
    1 (java.lang.Integer): 5

--------------------------------------------------------

C. After modifying one of original's elements
  Contents of "original":
    0 (java.lang.StringBuffer): The quick brown fox jumps over the lazy dog.

  Contents of "clone":
    0 (java.lang.StringBuffer): The quick brown fox jumps over the lazy dog.
    1 (java.lang.Integer): 5

--------------------------------------------------------



Figure 2. Output from the example code in Figure 1

In the first block of output (”A”), we see that the clone operation was successful: The original vector and the clone have the same size (1), content types, and values. The second block of output (”B”) shows that the original vector and its clone are distinct objects. If we add another element to the clone, it only appears in the clone, and not in the original. The third block of output (”C”) is, however, a little trickier. Modifying the StringBuffer that was added to the original vector has changed the value of the first element of both the original vector and its clone. The explanation for this lies in the fact that clone made a shallow copy of the vector, so both vectors now point to the exact same StringBuffer instance.

This is, of course, sometimes exactly the behavior that you need. In other cases, however, it can lead to frustrating and inexplicable errors, as the state of an object seems to change “behind your back”.

The solution to this problem is to make a deep copy of the object. A deep copy makes a distinct copy of each of the object's fields, recursing through the entire graph of other objects referenced by the object being copied. The Java API provides no deep-copy equivalent to Object.clone(). One solution is to simply implement your own custom method (e.g., deepCopy()) that returns a deep copy of an instance of one of your classes (a small sketch follows the list below). This may be the best solution if you need a complex mixture of deep and shallow copies for different fields, but it has a few significant drawbacks:


  1. You must be able to modify the class (i.e., have the source code) or implement a subclass. If you have a third-party class for which you do not have the source and which is marked final, you are out of luck.
  2. You must be able to access all of the fields of the class’s superclasses. If significant parts of the object’s state are contained in private fields of a superclass, you will not be able to access them.
  3. You must have a way to make copies of instances of all of the other kinds of objects that the object references. This is particularly problematic if the exact classes of referenced objects cannot be known until runtime.
  4. Custom deep copy methods are tedious to implement, easy to get wrong, and difficult to maintain. The method must be revisited any time a change is made to the class or to any of its superclasses.
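
As a rough illustration of the hand-written approach (the Order class below is hypothetical, not from the article), a custom deepCopy() simply copies every mutable field explicitly:

import java.util.Date;

public class Order {
    private String id;    // immutable - safe to share between copies
    private Date created; // mutable - must be copied explicitly

    public Order(String id, Date created) {
        this.id = id;
        this.created = created;
    }

    // Hand-written deep copy: each mutable field gets its own copy.
    public Order deepCopy() {
        return new Order(id, (Date) created.clone());
    }
}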

A common solution to the deep copy problem is to use Java Object Serialization (JOS). The idea is simple: Write the object to an array using JOS's ObjectOutputStream and then use ObjectInputStream to reconstitute a copy of the object. The result will be a completely distinct object, with completely distinct referenced objects. JOS takes care of all of the details: superclass fields, following object graphs, and handling repeated references to the same object within the graph. Figure 3 shows a first draft of a utility class that uses JOS for making deep copies.



import java.io.IOException;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectInputStream;

/**
 * Utility for making deep copies (vs. clone()'s shallow copies) of
 * objects. Objects are first serialized and then deserialized. Error
 * checking is fairly minimal in this implementation. If an object is
 * encountered that cannot be serialized (or that references an object
 * that cannot be serialized) an error is printed to System.err and
 * null is returned. Depending on your specific application, it might
 * make more sense to have copy(...) re-throw the exception.
 *
 * A later version of this class includes some minor optimizations.
 */
public class UnoptimizedDeepCopy {

    /**
     * Returns a copy of the object, or null if the object cannot
     * be serialized.
     */
    public static Object copy(Object orig) {
        Object obj = null;
        try {
            // Write the object out to a byte array
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bos);
            out.writeObject(orig);
            out.flush();
            out.close();

            // Make an input stream from the byte array and read
            // a copy of the object back in.
            ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
            obj = in.readObject();
        }
        catch (IOException e) {
            e.printStackTrace();
        }
        catch (ClassNotFoundException cnfe) {
            cnfe.printStackTrace();
        }
        return obj;
    }
}



Figure 3. Using Java Object Serialization to make a deep copy

Unfortunately, this approach has some problems, too:


  1. It will only work when the object being copied, as well as all of the other objects referenced directly or indirectly by the object, are serializable. (In other words, they must implement java.io.Serializable.) Fortunately it is often sufficient to simply declare that a given class implements java.io.Serializable and let Java's default serialization mechanisms do their thing.
  2. Java Object Serialization is slow, and using it to make a deep copy requires both serializing and deserializing. There are ways to speed it up (e.g., by pre-computing serial version ids and defining custom readObject() and writeObject() methods), but this will usually be the primary bottleneck.
  3. The byte array stream implementations included in the java.io package are designed to be general enough to perform reasonably well for data of different sizes and to be safe to use in a multi-threaded environment. These characteristics, however, slow down ByteArrayOutputStream and (to a lesser extent) ByteArrayInputStream.

The first two of these problems cannot be addressed in a general way. We can, however, use alternative implementations of ByteArrayOutputStream and ByteArrayInputStream that make three simple optimizations:


  1. ByteArrayOutputStream, by default, begins with a 32 byte array for the output. As content is written to the stream, the required size of the content is computed and (if necessary) the array is expanded to the greater of the required size or twice the current size. JOS produces output that is somewhat bloated (for example, fully qualified class names are included in uncompressed string form), so the 32 byte default starting size means that lots of small arrays are created, copied into, and thrown away as data is written. This has an easy fix: construct the array with a larger initial size.
  2. All of the methods of ByteArrayOutputStream that modify the contents of the byte array are synchronized. In general this is a good idea, but in this case we can be certain that only a single thread will ever be accessing the stream. Removing the synchronization will speed things up a little. ByteArrayInputStream’s methods are also synchronized.
  3. The toByteArray() method creates and returns a copy of the stream’s byte array. Again, this is usually a good idea: If you retrieve the byte array and then continue writing to the stream, the retrieved byte array should not change. For this case, however, creating another byte array and copying into it merely wastes cycles and makes extra work for the garbage collector.

An optimized implementation of ByteArrayOutputStream is shown in Figure 4.


import java.io.OutputStream;
import java.io.InputStream;

/**
 * ByteArrayOutputStream implementation that doesn't synchronize methods
 * and doesn't copy the data on toByteArray().
 */
public class FastByteArrayOutputStream extends OutputStream {

    /**
     * Buffer and size
     */
    protected byte[] buf = null;
    protected int size = 0;

    /**
     * Constructs a stream with buffer capacity size 5K
     */
    public FastByteArrayOutputStream() {
        this(5 * 1024);
    }

    /**
     * Constructs a stream with the given initial size
     */
    public FastByteArrayOutputStream(int initSize) {
        this.size = 0;
        this.buf = new byte[initSize];
    }

    /**
     * Ensures that we have a large enough buffer for the given size.
     */
    private void verifyBufferSize(int sz) {
        if (sz > buf.length) {
            byte[] old = buf;
            buf = new byte[Math.max(sz, 2 * buf.length)];
            System.arraycopy(old, 0, buf, 0, old.length);
            old = null;
        }
    }

    public int getSize() {
        return size;
    }

    /**
     * Returns the byte array containing the written data. Note that this
     * array will almost always be larger than the amount of data actually
     * written.
     */
    public byte[] getByteArray() {
        return buf;
    }

    public final void write(byte b[]) {
        verifyBufferSize(size + b.length);
        System.arraycopy(b, 0, buf, size, b.length);
        size += b.length;
    }

    public final void write(byte b[], int off, int len) {
        verifyBufferSize(size + len);
        System.arraycopy(b, off, buf, size, len);
        size += len;
    }

    public final void write(int b) {
        verifyBufferSize(size + 1);
        buf[size++] = (byte) b;
    }

    public void reset() {
        size = 0;
    }

    /**
     * Returns a ByteArrayInputStream for reading back the written data
     */
    public InputStream getInputStream() {
        return new FastByteArrayInputStream(buf, size);
    }
}



Figure 4. Optimized version of ByteArrayOutputStream

The getInputStream() method returns an instance of an optimized version of ByteArrayInputStream that has unsynchronized methods. The implementation of FastByteArrayInputStream is shown in Figure 5.



import java.io.InputStream;

/**
 * ByteArrayInputStream implementation that does not synchronize methods.
 */
public class FastByteArrayInputStream extends InputStream {

    /**
     * Our byte buffer
     */
    protected byte[] buf = null;

    /**
     * Number of bytes that we can read from the buffer
     */
    protected int count = 0;

    /**
     * Number of bytes that have been read from the buffer
     */
    protected int pos = 0;

    public FastByteArrayInputStream(byte[] buf, int count) {
        this.buf = buf;
        this.count = count;
    }

    public final int available() {
        return count - pos;
    }

    public final int read() {
        return (pos < count) ? (buf[pos++] & 0xff) : -1;
    }

    public final int read(byte[] b, int off, int len) {
        if (pos >= count)
            return -1;

        if ((pos + len) > count)
            len = (count - pos);

        System.arraycopy(buf, pos, b, off, len);
        pos += len;
        return len;
    }

    public final long skip(long n) {
        if ((pos + n) > count)
            n = count - pos;
        if (n < 0)
            return 0;
        pos += n;
        return n;
    }
}



Figure 5. Optimized version of ByteArrayInputStream.

Figure 6 shows a version of a deep copy utility that uses these classes:



import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.ObjectInputStream;

/**
 * Utility for making deep copies (vs. clone()'s shallow copies) of
 * objects. Objects are first serialized and then deserialized. Error
 * checking is fairly minimal in this implementation. If an object is
 * encountered that cannot be serialized (or that references an object
 * that cannot be serialized) an error is printed to System.err and
 * null is returned. Depending on your specific application, it might
 * make more sense to have copy(...) re-throw the exception.
 */
public class DeepCopy {

    /**
     * Returns a copy of the object, or null if the object cannot
     * be serialized.
     */
    public static Object copy(Object orig) {
        Object obj = null;
        try {
            // Write the object out to a byte array
            FastByteArrayOutputStream fbos =
                new FastByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(fbos);
            out.writeObject(orig);
            out.flush();
            out.close();

            // Retrieve an input stream from the byte array and read
            // a copy of the object back in.
            ObjectInputStream in =
                new ObjectInputStream(fbos.getInputStream());
            obj = in.readObject();
        }
        catch (IOException e) {
            e.printStackTrace();
        }
        catch (ClassNotFoundException cnfe) {
            cnfe.printStackTrace();
        }
        return obj;
    }
}



Figure 6. Deep-copy implementation using optimized byte array streams

The extent of the speed boost will depend on a number of factors in your specific application (more on this later), but the simple class shown in Figure 7 tests the optimized and unoptimized versions of the deep copy utility by repeatedly copying a large object.



import java.util.Hashtable;
import java.util.Vector;
import java.util.Date;

public class SpeedTest {

    public static void main(String[] args) {
        // Make a reasonably large test object. Note that this doesn't
        // do anything useful -- it is simply intended to be large, have
        // several levels of references, and be somewhat random. We start
        // with a hashtable and add vectors to it, where each element in
        // the vector is a Date object (initialized to the current time),
        // a semi-random string, and a (circular) reference back to the
        // object itself. In this case the resulting object produces
        // a serialized representation that is approximately 700K.
        Hashtable obj = new Hashtable();
        for (int i = 0; i < 100; i++) {
            Vector v = new Vector();
            for (int j = 0; j < 100; j++) {
                v.addElement(new Object[] {
                    new Date(),
                    "A random number: " + Math.random(),
                    obj
                });
            }
            obj.put(new Integer(i), v);
        }

        int iterations = 10;

        // Make copies of the object using the unoptimized version
        // of the deep copy utility.
        long unoptimizedTime = 0L;
        for (int i = 0; i < iterations; i++) {
            long start = System.currentTimeMillis();
            Object copy = UnoptimizedDeepCopy.copy(obj);
            unoptimizedTime += (System.currentTimeMillis() - start);

            // Avoid having GC run while we are timing...
            copy = null;
            System.gc();
        }

        // Repeat with the optimized version
        long optimizedTime = 0L;
        for (int i = 0; i < iterations; i++) {
            long start = System.currentTimeMillis();
            Object copy = DeepCopy.copy(obj);
            optimizedTime += (System.currentTimeMillis() - start);

            // Avoid having GC run while we are timing...
            copy = null;
            System.gc();
        }

        System.out.println("Unoptimized time: " + unoptimizedTime);
        System.out.println("  Optimized time: " + optimizedTime);
    }
}



Figure 7. Testing the two deep copy implementations.

A few notes about this test:


  • The object that we are copying is large. While somewhat random, it will generally have a serialized size of around 700 Kbytes.
  • The most significant speed boost comes from avoiding extra copying of data in FastByteArrayOutputStream. This has several implications:

    1. Using the unsynchronized FastByteArrayInputStream speeds things up a little, but the standard java.io.ByteArrayInputStream is nearly as fast.
    2. Performance is mildly sensitive to the initial buffer size in FastByteArrayOutputStream, but is much more sensitive to the rate at which the buffer grows. If the objects you are copying tend to be of similar size, copying will be much faster if you initialize the buffer size and tweak the rate of growth.

  • Measuring speed using elapsed time between two calls to System.currentTimeMillis() is problematic, but for single-threaded applications and testing relatively slow operations it is sufficient. A number of commercial tools (such as JProfiler) will give more accurate per-method timing data.
  • Testing code in a loop is also problematic, since the first few iterations will be slower until HotSpot decides to compile the code. Testing larger numbers of iterations alleviates this problem.
  • Garbage collection further complicates matters, particularly in cases where lots of memory is allocated. In this example, we manually invoke the garbage collector after each copy to try to keep it from running while a copy is in progress.

These caveats aside, the performance difference is significant. For example, the code as shown in Figure 7 (on a 500 MHz G3 Macintosh iBook running OS X 10.3 and Java 1.4.1) reveals that the unoptimized version requires about 1.8 seconds per copy, while the optimized version only requires about 1.3 seconds. Whether or not this difference is significant will, of course, depend on the frequency with which your application does deep copies and the size of the objects being copied.

3/27/2007

Comparing Arrays, Lists, and Maps

Generally we choose between ArrayList and Vector based on one basic requirement: whether access has to be synchronized or not. Beyond that, we rarely consider how well each implementation's underlying algorithms suit the access patterns we actually need.

For example, suppose we know there will be exactly 10 objects to store and iterate over every time. Out of fondness for the API we still reach for ArrayList or Vector, without considering a plain array, which is a simpler and very efficient choice when the size is known.

There are several other access patterns that we, as developers, should weigh when choosing an implementation, such as:

1. Insert elements at the end of a list
2. Insert elements in the beginning of a list
3. Insert elements at random positions in a list
4. Access elements from the first to the last
5. Access elements from the last to the first
6. Access elements in random order
7. Update elements in random order


Here is an excellent article that explains the collections on a case-by-case basis. As a quick illustration, the sketch below times one of these cases.
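
As a rough illustration of case 2 above (inserting at the beginning of a list), the following small benchmark, written for this post, compares ArrayList and LinkedList; the absolute numbers depend on your machine, only the relative difference matters.

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class InsertAtFrontTest {
    public static void main(String[] args) {
        time(new ArrayList<Integer>(), "ArrayList");
        time(new LinkedList<Integer>(), "LinkedList");
    }

    private static void time(List<Integer> list, String name) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < 50000; i++) {
            // Insert at the front: O(n) per call for ArrayList (it shifts
            // every element), O(1) for LinkedList.
            list.add(0, i);
        }
        System.out.println(name + ": " + (System.currentTimeMillis() - start) + " ms");
    }
}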

XML as database

Here is how you can connect to the database:
try {
    // Load the DB2 JDBC Type 2 Driver with DriverManager
    Class.forName("COM.ibm.db2.jdbc.app.DB2Driver");
} catch (ClassNotFoundException e) {
    e.printStackTrace();
}

Getting a connection:
connection = DriverManager.getConnection(url, user, pass);

Preparing a statement:
PreparedStatement stmt = connection.prepareStatement(sql);

Reading the result set, option 1:
ResultSet resultSet = stmt.executeQuery();

Option 2:
InputStream inputStream = resultSet.getBinaryStream(1);

Option 3:
DB2Xml db2xml = (DB2Xml) resultSet.getObject(1);

Selecting an XML value:
String sql = "SELECT PID, DESCRIPTION from XMLPRODUCT where PID = ?";
PreparedStatement stmt = connection.prepareStatement(sql);
stmt.setString(1, "100-105-09");
ResultSet resultSet = stmt.executeQuery();
String xml = resultSet.getString("DESCRIPTION");
// or: InputStream inputStream = resultSet.getBinaryStream("DESCRIPTION");
// or: Reader reader = resultSet.getCharacterStream("DESCRIPTION");
// or: DB2Xml db2xml = (DB2Xml) resultSet.getObject("DESCRIPTION");


You can use the other accessor methods available in JDBC too:

getString( )
getBinaryStream( )
getCharacterStream( )
getObject( )


Inserting an XML file:

String sql = "INSERT INTO xmlproduct VALUES(?, ?)";
PreparedStatement stmt = connection.prepareStatement(sql);
stmt.setString(1, "100-105-09");
File binFile = new File("productBinIn.xml");
InputStream inBin = new FileInputStream(binFile);
stmt.setBinaryStream(2, inBin, (int) binFile.length());
stmt.execute();
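
Putting the pieces together, here is a minimal end-to-end sketch of the select case. The table and PID value come from the snippets above; the JDBC URL, user, and password are placeholders you would replace with your own DB2 settings.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class XmlProductLookup {
    public static void main(String[] args) throws Exception {
        // Driver class as above; URL and credentials are placeholders.
        Class.forName("COM.ibm.db2.jdbc.app.DB2Driver");
        Connection connection =
            DriverManager.getConnection("jdbc:db2:SAMPLE", "user", "pass");

        PreparedStatement stmt = connection.prepareStatement(
            "SELECT PID, DESCRIPTION FROM XMLPRODUCT WHERE PID = ?");
        stmt.setString(1, "100-105-09");

        ResultSet resultSet = stmt.executeQuery();
        while (resultSet.next()) {
            // Read the XML column as a plain string; the other accessors
            // listed above (getBinaryStream, getCharacterStream, getObject)
            // work the same way.
            System.out.println(resultSet.getString("DESCRIPTION"));
        }

        resultSet.close();
        stmt.close();
        connection.close();
    }
}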


XML database driver

DWR - Easy AJAX for JAVA

DWR is a Java open source library which allows you to write Ajax web sites.

It allows code in a browser to use Java functions running on a web server just as if it was in the browser.

DWR consists of two main parts:

  • A Java Servlet running on the server that processes requests and sends responses back to the browser.
  • JavaScript running in the browser that sends requests and can dynamically update the webpage.

DWR works by dynamically generating JavaScript based on Java classes. The code does some Ajax magic to make it feel like the execution is happening in the browser, but in reality the server is executing the code and DWR is marshalling the data back and forth.

This method of remoting functions from Java to JavaScript gives DWR users a feel much like conventional RPC mechanisms such as RMI or SOAP, with the benefit that it runs over the web without requiring web-browser plug-ins.

Java is fundamentally synchronous where Ajax is asynchronous, so when you call a remote method you provide DWR with a callback function to be called when the data has been returned from the network.

The diagram (not reproduced here) shows how DWR can alter the contents of a selection list as a result of some JavaScript event like onclick: DWR dynamically generates an AjaxService class in JavaScript to match some server-side code, and this is called by the event handler. DWR then handles all the remoting details, including converting parameters and return values between JavaScript and Java, and finally executes the supplied callback function (populateList in that example), which uses a DWR utility function to alter the web page.
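
The server-side code that DWR exposes can be an ordinary Java class. The class and method below are made up purely for illustration; in a real project DWR also has to be told about the class in its dwr.xml configuration.

public class AjaxService {

    // Called from the browser via the JavaScript proxy that DWR generates;
    // the return value is marshalled back and passed to the callback
    // function (e.g. populateList) supplied on the JavaScript side.
    public String[] getOptions(String category) {
        // A real implementation would query a database or service here.
        return new String[] { category + " - option 1", category + " - option 2" };
    }
}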

3/22/2007

Generics FAQ

Want to know more about generics? Here is a good article that explains generics on a case-by-case basis:

Generics FAQ

3/15/2007

Tomcat Vs OC4J

A couple of differences between Tomcat and Oracle Application Server (OC4J)... There could be more...

org.w3c.dom.Document: getElementsByTagName()
Tomcat (Xerces): doc.getElementsByTagName("SOAP-ENV:Envelope") is valid. It treats the namespace prefix as if it were just part of the tag name.
OC4J (oraclexmlparserv2): doc.getElementsByTagName("SOAP-ENV:Envelope") is not valid.
Solution: Use doc.getElementsByTagNameNS(). To use this method, you must make sure to call setNamespaceAware(true) on your DocumentBuilderFactory (see the sketch after these notes).

javax.xml.parsers.DocumentBuilderFactory: isNamespaceAware()
Tomcat (Xerces): Defaults to false.
OC4J (oraclexmlparserv2): Defaults to true (which contradicts the documentation: http://java.sun.com/j2se/1.5.0/docs/api/javax/xml/parsers/DocumentBuilderFactory.html#setNamespaceAware(boolean)).

Class: forName()
Tomcat: Always throws ClassNotFoundException if the class isn't found.
OC4J: Throws NoClassDefFoundError in some situations if the class isn't found.
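
A minimal sketch of the namespace-aware approach suggested above, so the same code behaves identically under Xerces and the Oracle parser (the file name is just illustrative):

import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class EnvelopeLookup {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true); // explicit, so both parsers behave the same

        Document doc = factory.newDocumentBuilder().parse("soap-message.xml");

        // Look the element up by namespace URI and local name instead of
        // relying on the "SOAP-ENV" prefix being part of the tag name.
        NodeList envelopes = doc.getElementsByTagNameNS(
            "http://schemas.xmlsoap.org/soap/envelope/", "Envelope");

        System.out.println("Envelopes found: " + envelopes.getLength());
    }
}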

3/14/2007

Are generics fully functional...?

Consider the snippet,

Vector<String> strObject = new Vector<String>();
strObject.add("STR1");
strObject.add("STR2");

Now adding an int or any other type of object into this collection would cause a compile-time error, which sounds good...!
// strObject.add(12) -- compilation error

On the other hand,

Vector newVector = strObject;
newVector.add(12);

This doesn't cause any compilation errors; instead you just see a warning to recompile with -Xlint, which we often ignore....!
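
The problem only surfaces later, at retrieval time, when the cast that the compiler inserts for you fails. A minimal sketch of that failure:

import java.util.Vector;

public class RawTypeLeak {
    public static void main(String[] args) {
        Vector<String> strObject = new Vector<String>();
        strObject.add("STR1");

        Vector newVector = strObject; // raw reference to the same Vector
        newVector.add(12);            // compiles with only an "unchecked" warning

        // The compiler inserts a cast to String for each element, so this
        // loop throws ClassCastException when it reaches the Integer.
        for (String s : strObject) {
            System.out.println(s);
        }
    }
}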

serialVersionUID

static final long serialVersionUID = -4544769666886838818L;

What does this mean to Gosling's tool..?

It's used when deserializing an object, to determine whether that object is compatible with the class file for that object in the JVM doing the deserialization. If the serialVersionUID of the class file doesn't match the serialVersionUID of the deserialized object, you'll get an InvalidClassException. If the class file doesn't explicitly declare a serialVersionUID, then the serialization runtime has to compute it, which is a relatively expensive process.

So what makes the object that was serialized and its class incompatible?
   1. Deleting fields from the class
   2. Changing the type of a field
   3. Changing a class from Serializable to Externalizable or vice versa

Keep Pandora's box closed by adding a serialVersionUID to the class.
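
A minimal sketch of a class that pins its own serialVersionUID (the class itself is made up for illustration):

import java.io.Serializable;

public class Customer implements Serializable {

    // Pinning the UID means that compatible changes (such as adding a new
    // field later) will not break deserialization of old Customer objects.
    private static final long serialVersionUID = 1L;

    private String name;
    private String email;

    public Customer(String name, String email) {
        this.name = name;
        this.email = email;
    }
}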

Firefox killer application - AllPeers

Fed up with email messages bouncing because your attachments are too big? There's a simple solution -- get a very good new Firefox add-in, AllPeers.

It's a simple peer-to-peer file-sharing app. Set it up, select the files you want to share and who you want to share them with, and that person gets a notification. They can then grab the files. It's that simple.

For the moment, AllPeers runs only under Firefox, but expect it to work with Internet Explorer some time in the future. If you've ever had a problem sending a file via email, it's worth a look..!

3/09/2007

Singleton Pattern

The objective of the pattern is that, at any given time, there can be only one instance of a class.

A singleton can be used to create a connection pool: we can have the connection object as a singleton to avoid wasting resources.

Steps to create a singleton:

1. Make the default constructor of the class private, which prevents instantiation of the object by other classes.

2. Define a static method that returns the singleton object. If the object doesn't exist yet, a new object is created and returned; otherwise the existing object is returned.

3. To avoid the object being cloned, override the clone method of the Object class.

4. If the application is going to be multithreaded, two threads could access the static method at the same time, which may create more than one instance of the singleton class. To avoid this, make the static method synchronized.


class Singleton {
    private static Singleton singletonObject;

    private Singleton() {
    }

    public static synchronized Singleton getInstance() {
        if (singletonObject == null) {
            singletonObject = new Singleton();
        }
        return singletonObject;
    }

    public Object clone() throws CloneNotSupportedException {
        throw new CloneNotSupportedException();
    }
}

public class SingletonObjectDemo {
    public static void main(String args[]) {
        // Singleton obj = new Singleton(); would cause a compilation error.
        // Create the Singleton object.
        Singleton obj = Singleton.getInstance();
    }
}
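
As an aside, an alternative to synchronizing getInstance() (step 4 above) is the initialization-on-demand holder idiom, which gets the same thread safety from the class loader for free; a minimal sketch:

class HolderSingleton {

    private HolderSingleton() {
    }

    // The nested class is not loaded until getInstance() is first called,
    // and class initialization is guaranteed by the JVM to be thread safe.
    private static class Holder {
        private static final HolderSingleton INSTANCE = new HolderSingleton();
    }

    public static HolderSingleton getInstance() {
        return Holder.INSTANCE;
    }
}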

NoClassDefFoundError Vs ClassNotFoundException

What... when...?
I always wonder why Java is not intelligent enough to fix problems in the code when it knows something has gone wrong, rather than just throwing exceptions. Maybe Gosling wants it that way, saving the donuts for himself!

Now read on...

A.java

1  public class A{
2
3  public static void main(String[] s) throws ClassNotFoundException{
4    B obj = new B();
5    // Class.forName("B").newInstance();
6  }
7
8  }

B.java

1 public class B{
2 }


javac A.java
java A

Compile both classes, then delete B.class and run A. This would throw a NoClassDefFoundError. Comment out line 4 and uncomment line 5. Now it would throw a ClassNotFoundException.

ClassNotFoundException is thrown when an application tries to load a class through its string name using:

  • The forName method in class Class.

  • The findSystemClass method in class ClassLoader.

  • The loadClass method in class ClassLoader.

Otherwise NoClassDefFoundError is thrown.

The interesting fact is that this behaves differently when your application runs in the OC4J Oracle application server, i.e. your application may throw a NoClassDefFoundError where you expected a ClassNotFoundException.

If you don't want to miss your donut, add two catch blocks to your code... I already missed one..:(

try {
    // code that loads the class by name
} catch (ClassNotFoundException e1) {
    // thrown by Class.forName() and friends
} catch (NoClassDefFoundError e2) {
    // thrown when a compiled-in class is missing at runtime
}

Added on 9/14/2012:
Further, I have also seen that if there are issues with class initialization, you can get a NoClassDefFoundError.

Once there is an initialization failure, the JVM marks the class as bad, and subsequent attempts to use or access the class result in NoClassDefFoundError (a small sketch follows).
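
A minimal sketch of that initialization-failure case (the Config class is made up for illustration):

public class InitFailureDemo {

    static class Config {
        // The static initializer throws, so class initialization fails.
        static final int PORT = Integer.parseInt("not a number");
    }

    public static void main(String[] args) {
        try {
            System.out.println(Config.PORT); // triggers initialization
        } catch (ExceptionInInitializerError e) {
            System.out.println("First use: " + e);
        }

        // The JVM has marked Config as bad; touching it again now produces
        // NoClassDefFoundError rather than the original initializer error.
        System.out.println(Config.PORT);
    }
}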

Web 2.0 in 5 minutes

Now you just need 5 minutes to know what Web 2.0 is all about.

3/05/2007

Google Maps @ Work..!

In the first part of this article, we will discuss how to integrate a feature-rich map into your application in record time, by using the Google Maps API. The Google Maps API is an easy-to-use JavaScript API that enables you to embed interactive maps directly in your application's web pages. And as we will see, it is easy to extend it to integrate real-time server requests using Ajax.

Getting started with the Google Maps API is easy. There is nothing to download; you just need to sign up to obtain a key to use the API. There is no charge for publicly accessible sites (for more details, see the Sign up for the Google Maps API page). You need to provide the URL of your website, and, when your application is deployed on a website, your key will only work from this URL. One annoying thing about this constraint is that you need to set up a special key to use for your development or test machines: for the sample code, I had to create a special key for http://localhost:8080/maps, for example.

Once you have a valid key, you can see the Google Maps API in action. Let's start off with something simple: displaying a map on our web page.

Although the API is not particularly complicated, working with Google Maps requires a minimal knowledge of JavaScript. You also need to know the latitude and longitude of the area you want to display. If you're not sure, you can find this sort of information on the internet, or even by looking in an atlas!

The full code listing of our first Google Map is shown here:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Our first Google Map</title>
<script src="http://maps.google.com/maps?file=api&v=2&key=MYKEY"
type="text/javascript"></script>
<script type="text/javascript">
//<![CDATA[
  function load() {
    if (GBrowserIsCompatible()) {
      var map = new GMap2(document.getElementById("map"));
      map.setCenter(new GLatLng(-41.5, -185), 5);
    }
  }
//]]>
</script>
</head>
<body onload="load()" onunload="GUnload()">
<div id="map" style="width: 420px; height: 420px"></div>
</body>
</html>

The first thing to notice here is the code that fetches the actual JavaScript code from the Google Maps server. You need to supply your key here for the code to work.


<script src="http://maps.google.com/maps?file=api&v=2&key=MYKEY"
type="text/javascript">
</script>

Next comes the code that actually downloads the map from the server.


<script type="text/javascript">
//<![CDATA[
  function load() {
    if (GBrowserIsCompatible()) {
      var map = new GMap2(document.getElementById("map"));
      map.setCenter(new GLatLng(-41.5, -187.5), 5);
    }
  }
//]]>
</script>


Finally, in the body, we display the map. The size and shape of the map are taken from the corresponding HTML element. The map is initialized when the page is loaded (via the onload event). In addition, when the user leaves the page, the GUnload() method is called (via the onunload event). This cleans up the map data structure in order to avoid memory leak problems that occur in Internet Explorer.


<body onload="load()" onunload="GUnload()">
<div id="map" style="width: 420px; height: 420px"></div>
</body>


Panning and Zooming

Now that we can successfully display a map, let's try to add some zoom functionality. The Google Maps API lets you add a number of different controls to your map, including panning and zooming tools, a map scale, and a set of buttons letting you change between Map and Satellite views. In our example, we'll add a small pan/zoom control and an "Overview map" control, which places a small, collapsible overview map. You add controls using the addControl() method, as shown here:


function load() {
  if (GBrowserIsCompatible()) {
    var map = new GMap2(document.getElementById("map"));
    map.setCenter(new GLatLng(-41.5, -187.5), 5);
    map.addControl(new GSmallMapControl());
    map.addControl(new GOverviewMapControl());
  }
}

dzone.com