ScheduledExecutorService java.util.concurrent Tutorial

The java.util.concurrent.ScheduledExecutorService is an ExecutorService which can schedule tasks to run after a delay, or to execute repeatedly. Tasks are executed asynchronously by a worker thread, and not by the thread handing the task to the ScheduledExecutorService.

ScheduledExecutorService Implementations

Since ScheduledExecutorService is an interface, you have to use one of its implementations in the java.util.concurrent package in order to use it. ScheduledExecutorService has the following implementation:

  • ScheduledThreadPoolExecutor

Creating a ScheduledExecutorService

How you create a ScheduledExecutorService depends on the implementation you use. However, you can also use the Executors factory class to create ScheduledExecutorService instances. Here is an example:

ScheduledExecutorService scheduledExecutorService =
        Executors.newScheduledThreadPool(5);

ScheduledExecutorService Usage

Once you have created a ScheduledExecutorService you use it by calling one of its methods:

  • schedule (Callable task, long delay, TimeUnit timeunit)
  • schedule (Runnable task, long delay, TimeUnit timeunit)
  • scheduleAtFixedRate (Runnable, long initialDelay, long period, TimeUnit timeunit)
  • scheduleWithFixedDelay (Runnable, long initialDelay, long delay, TimeUnit timeunit)

I will briefly cover each of these methods below.

schedule (Callable task, long delay, TimeUnit timeunit)

This method schedules the given Callable for execution after the given delay.

The method returns a ScheduledFuture which you can use to either cancel the task before it has started executing, or obtain the result once it is executed.

Here is an example:

ScheduledExecutorService scheduledExecutorService =
        Executors.newScheduledThreadPool(5);

ScheduledFuture scheduledFuture =
    scheduledExecutorService.schedule(new Callable() {
        public Object call() throws Exception {
            System.out.println("Executed!");
            return "Called!";
        }
    },
    5,
    TimeUnit.SECONDS);

System.out.println("result = " + scheduledFuture.get());

scheduledExecutorService.shutdown();

This example outputs:

Executed!
result = Called!

schedule (Runnable task, long delay, TimeUnit timeunit)

This method works like the version that takes a Callable, except that a Runnable cannot return a value, so the ScheduledFuture.get() method returns null when the task has finished.
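
For illustration, here is a sketch that mirrors the Callable example above, reusing the same scheduledExecutorService; the 5-second delay is arbitrary:

ScheduledFuture scheduledFuture =
    scheduledExecutorService.schedule(new Runnable() {
        public void run() {
            System.out.println("Executed!");
        }
    },
    5,
    TimeUnit.SECONDS);

// blocks until the task has run, then returns null because the task is a Runnable
System.out.println("result = " + scheduledFuture.get());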

scheduleAtFixedRate (Runnable, long initialDelay, long period, TimeUnit timeunit)

This method schedules a task to be executed periodically. The task is executed the first time after the initialDelay, and then repeatedly every time the period expires.

If any execution of the given task throws an exception, the task is no longer executed. If no exceptions are thrown, the task will continue to be executed until the ScheduledExecutorService is shut down.

If a task takes longer to execute than the period between its scheduled executions, the next execution will start after the current execution finishes. The scheduled task will not be executed by more than one thread at a time.
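
Here is a small sketch of a fixed-rate schedule, reusing the scheduledExecutorService from above; the initial delay of 1 second and the period of 5 seconds are arbitrary values:

scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
    public void run() {
        System.out.println("Executed at " + System.currentTimeMillis());
    }
},
1,    // initial delay
5,    // period, measured from the start of one execution to the start of the next
TimeUnit.SECONDS);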

scheduleWithFixedDelay (Runnable, long initialDelay, long delay, TimeUnit timeunit)

This method works very much like scheduleAtFixedRate() except that the delay is interpreted differently.

In the scheduleAtFixedRate() method the period is interpreted as the delay between the start of the previous execution and the start of the next execution.

In this method, however, the delay is interpreted as the time from the end of the previous execution until the start of the next. The delay is thus between finished executions, not between the beginnings of executions.
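
And the same task scheduled with a fixed delay, again as a sketch; the 5 seconds are now measured from the end of each execution:

scheduledExecutorService.scheduleWithFixedDelay(new Runnable() {
    public void run() {
        System.out.println("Executed at " + System.currentTimeMillis());
    }
},
1,    // initial delay
5,    // delay, measured from the end of one execution to the start of the next
TimeUnit.SECONDS);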

 

Source : http://tutorials.jenkov.com/java-util-concurrent/scheduledexecutorservice.html


ExecutorService of java.util.concurrent Tutorial

The java.util.concurrent.ExecutorService interface represents an asynchronous execution mechanism which is capable of executing tasks in the background.

An ExecutorService is thus very similar to a thread pool. In fact, the implementations of ExecutorService present in the java.util.concurrent package are thread pool implementations.

Here is a diagram illustrating a thread delegating a task to an ExecutorService for asynchronous execution:

A thread delegating a task to an ExecutorService for asynchronous execution.

 

ExecutorService Implementations

Since ExecutorService is an interface, you need to use one of its implementations in order to make any use of it. ExecutorService has the following implementations in the java.util.concurrent package:

  • ThreadPoolExecutor
  • ScheduledThreadPoolExecutor

Creating an ExecutorService

How you create an ExecutorService depends on the implementation you use. However, you can also use the Executors factory class to create ExecutorService instances. Here are a few examples:

ExecutorService executorService1 = Executors.newSingleThreadExecutor();

ExecutorService executorService2 = Executors.newFixedThreadPool(10);

ExecutorService executorService3 = Executors.newScheduledThreadPool(10);

 

ExecutorService Usage

There are a few different ways to delegate tasks for execution to an ExecutorService:

  • execute(Runnable)
  • submit(Runnable)
  • submit(Callable)
  • invokeAny(…)
  • invokeAll(…)

I will take a look at each of these methods in the following sections.

 

execute(Runnable)

The execute(Runnable) method takes a java.lang.Runnable object, and executes it asynchronously. Here is an example:

ExecutorService executorService = Executors.newSingleThreadExecutor();

executorService.execute(new Runnable() {
    public void run() {
        System.out.println("Asynchronous task");
    }
});

executorService.shutdown();

There is no way of obtaining the result of the executed Runnable. If you need a result, you will have to use a Callable instead (explained in the following sections).

 

submit(Runnable)

The submit(Runnable) method also takes a Runnable implementation, but returns a Future object. This Future object can be used to check if the Runnable has finished executing.

Here is a submit() example:

Future future = executorService.submit(new Runnable() {
    public void run() {
        System.out.println("Asynchronous task");
    }
});

future.get();  //returns null if the task has finished correctly.

 

submit(Callable)

The submit(Callable) method is similar to the submit(Runnable) method except for the type of parameter it takes. The Callable instance is very similar to a Runnable except that its call() method can return a result. The Runnable.run() method cannot return a result.

The Callable's result can be obtained via the Future object returned by the submit(Callable) method. Here is a code example:

Future future = executorService.submit(new Callable(){
    public Object call() throws Exception {
        System.out.println("Asynchronous Callable");
        return "Callable Result";
    }
});

System.out.println("future.get() = " + future.get());

The above code example will output this:

Asynchronous Callable
future.get() = Callable Result

 

invokeAny()

The invokeAny() method takes a collection of Callable objects, or subinterfaces of Callable. Invoking this method does not return a Future, but returns the result of one of the Callable objects. You have no guarantee about which Callable's result you get, just one of the ones that finishes.

As soon as one of the tasks completes (or throws an exception), the rest of the Callables are cancelled.

Here is a code example:

ExecutorService executorService = Executors.newSingleThreadExecutor();

Set<Callable<String>> callables = new HashSet<Callable<String>>();

callables.add(new Callable<String>() {
    public String call() throws Exception {
        return "Task 1";
    }
});
callables.add(new Callable<String>() {
    public String call() throws Exception {
        return "Task 2";
    }
});
callables.add(new Callable<String>() {
    public String call() throws Exception {
        return "Task 3";
    }
});

String result = executorService.invokeAny(callables);

System.out.println("result = " + result);

executorService.shutdown();

This code example will print out the object returned by one of the Callables in the given collection. I have tried running it a few times, and the result changes. Sometimes it is "Task 1", sometimes "Task 2", etc.

 

invokeAll()

The invokeAll() method invokes all of the Callable objects you pass to it in the collection passed as a parameter. invokeAll() returns a list of Future objects via which you can obtain the result of the execution of each Callable.

Keep in mind that a task might finish due to an exception, so it may not have “succeeded”. There is no way on a Future to tell the difference.

Here is a code example:

ExecutorService executorService = Executors.newSingleThreadExecutor();

Set<Callable<String>> callables = new HashSet<Callable<String>>();

callables.add(new Callable<String>() {
    public String call() throws Exception {
        return "Task 1";
    }
});
callables.add(new Callable<String>() {
    public String call() throws Exception {
        return "Task 2";
    }
});
callables.add(new Callable<String>() {
    public String call() throws Exception {
        return "Task 3";
    }
});

List<Future<String>> futures = executorService.invokeAll(callables);

for(Future<String> future : futures){
    System.out.println("future.get = " + future.get());
}

executorService.shutdown();

 

Closing an ExecutorService

When you are done using the ExecutorService you should shut it down, so the threads do not keep running.

For instance, if your application is started via a main() method and your main thread exits your application, the application will keep running if you have an active ExecutorService in your application. The active threads inside this ExecutorService prevent the JVM from shutting down.

To terminate the threads inside the ExecutorService you call its shutdown() method. The ExecutorService will not shut down immediately, but it will no longer accept new tasks, and once all threads have finished their current tasks, the ExecutorService shuts down. All tasks submitted to the ExecutorService before shutdown() is called are executed.

If you want to shut down the ExecutorService immediately, you can call the shutdownNow() method. This will attempt to stop all executing tasks right away, and skips all submitted but not-yet-processed tasks. There are no guarantees given about the executing tasks. Perhaps they stop, perhaps they execute until the end. It is a best effort attempt.
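
A common pattern, sketched here with an arbitrary 60-second timeout, is to call shutdown(), wait a bounded amount of time with awaitTermination(), and only then fall back to shutdownNow():

executorService.shutdown();
try {
    // wait up to 60 seconds for already submitted tasks to finish
    if (!executorService.awaitTermination(60, TimeUnit.SECONDS)) {
        // force shutdown if the tasks did not finish in time
        executorService.shutdownNow();
    }
} catch (InterruptedException e) {
    // re-cancel if the current thread was interrupted while waiting
    executorService.shutdownNow();
    Thread.currentThread().interrupt();
}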

Source : http://tutorials.jenkov.com/java-util-concurrent/executorservice.html

What Is Hash Code?

If you want to file something away for later retrieval, it can be faster if you file it numerically rather than by a long alphabetic key. A hashCode is a way of computing a small (32-bit) digest numeric key from a long String or even an arbitrary clump of bytes. The numeric key itself is meaningless and the hashCode functions for computing them can look a bit insane. However, when you go to look for something, you can do the same digest calculation on the long alphabetic key you are looking for, and no matter how bizarre an algorithm you used, you will calculate the same hashCode, and will be able to look up numerically with it. Of course there is always the possibility two different Strings will have the same digest hashCode. However, even then, all is not lost; it greatly narrows down the search, hence speeding it up. A Hashtable goes a step further, scrunching down the hashCode even further to an even smaller number that it can use to directly index an array, usually by dividing it by some (ideally prime) number and taking the remainder.

I saw this introduction on one of the sites I was surfing. The main idea of hashing is exactly this. In Java 1.0.x and 1.1, the String.hashCode function worked by sampling every nth character. However, this slowed down Hashtable lookups. With Java 1.2, the function was improved to multiply the result so far by 31 and then add the next character in sequence. This was slower, but much better at avoiding collisions. For Object.hashCode things were much the same. Then Sun came up with a much wider spec, which the 1.4 implementation followed.

Here is the spec for that:

  • Whenever it is invoked on the same object more than once during an execution of a Java application, the hashCode method must consistently return the same integer, provided no information used in equals comparisons on the object is modified. This integer need not remain consistent from one execution of an application to another execution of the same application.
  • If two objects are equal according to the equals(Object) method, then calling the hashCode method on each of the two objects must produce the same integer result.
  • It is not required that if two objects are unequal according to the equals(java.lang.Object) method, then calling the hashCode method on each of the two objects must produce distinct integer results. However, the programmer should be aware that producing distinct integer results for unequal objects may improve the performance of hashtables.

Compared to the general contract specified by the equals method, the hashCode contract simply states two important requirements that must be met while implementing the hashCode method.

  1. Consistency during same execution – Firstly, it states that the hash code returned by the hashCode method must be consistently the same for multiple invocations during the same execution of the application as long as the object is not modified to affect the equals method.
  2. Hash Code & Equals relationship – The second requirement of the contract is the hashCode counterpart of the requirement specified by the equals method. It simply emphasizes the same relationship: equal objects must produce the same hash code (see the sketch below). However, the third point elaborates that unequal objects need not produce distinct hash codes.
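
To make that relationship concrete, here is a small sketch of a hypothetical Point class whose equals and hashCode are based on the same fields, so equal instances always produce the same hash code:

public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;   // equality based on x and y
    }

    public int hashCode() {
        return 31 * x + y;             // same fields as equals, so equal points hash alike
    }
}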

Calculating Aggregate hashCodes with XOR

XOR is one of the most famous methods of calculating hashes. To calculate the hash of two fields, the basic idea is to calculate each field's hashCode separately and then XOR them together with the ^ operator. If the field is an array or a list, you can XOR all the values together (a sketch follows the pros and cons below). However, there are some pros and cons:

Pros

  • It does not depend on order of computation.
  • It does not “waste” bits. If you change even one bit in one of the components, the final value will change.
  • It is quick, a single cycle on even the most primitive computer.
  • It preserves uniform distribution. If the two pieces you combine are uniformly distributed so will the combination be. In other words, it does not tend to collapse the range of the digest into a narrower band.

Cons

  • It treats identical pairs of components as if they were not there.
  • It ignores order. A ^ B is the same a B ^ A. If order matters, XOR would not be suitable to compute the hashCode for a List.
  • If the values to be combined are only 8 bits wide, there will effectively be only 8 bits in the XORed result. In contrast, multiplying by a prime and adding would scramble the entire 32 bits of the result, giving a much broader range of possible values, and hence a greater chance that each hashCode will be unique to a single Object. In other words, it tends to expand the range of the digest into the widest possible band (32 bits).
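
Here is a rough sketch of the XOR approach described above; the field names (field1, field2, arrayField) are made up for illustration and assumed to exist on the object:

public int hashCode() {
    // XOR the hash codes of two fields; order is lost and identical values cancel out
    int hash = field1.hashCode() ^ field2.hashCode();

    // for an array field, XOR in every element's hash code
    for (String element : arrayField) {
        hash ^= element.hashCode();
    }

    return hash;
}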

Okay, that's enough technical information. Let's get to the code. For the first example, let's assume that we have two String fields in our object. Because we don't have any id field, these strings are our identity for generating the hashCode. To do that, we pick two prime numbers (in this case 17 and 37):

 

/**
 * Hash code that combines two strings.
 *
 * @return a hash code value on the pair of strings.
 */
public int hashCode() {
    int result = 17;

    result = 37 * result + string1.hashCode();
    result = 37 * result + string2.hashCode();
    // etc for all fields in the object

    return result;
}

Showing how String.hashCode works is also useful. This is just a rough implementation, but it reflects how it works:

 

/**
 * Simplified version of how String.hashCode works.
 * For the actual version see the src.jar java/lang/String.java file.
 *
 * @return a hash code value for this String.
 */
public int hashCode() {
    // inside the String is a char[] called val that holds the characters
    int hash = 0;
    int len = val.length;

    for (int i = 0; i < len; i++) {
        hash = 31 * hash + val[i];
    }

    return hash;
}

Here is another one. This is much faster and broader; you can apply it to bytes, shorts, chars, ints, arrays, etc.:

 

/**
 * Java version of the assembler hashCode used in the
 * BBL Forth compiler.
 *
 * @return a hash code value for the byte[] in this object.
 */
public int hashCode() {
    // byte[] val holds the data
    int hash = 0;
    int len = val.length;

    for (int i = 0; i < len; i++) {
        // rotate left and xor
        // (very fast in assembler, a bit clumsy in Java)
        hash <<= 1;

        if (hash < 0) {
            hash |= 1;
        }

        hash ^= val[i];
    }

    return hash;
}

That's it for now, till next time folks. I hope this article is useful.

Source : http://isagoksu.com/2009/development/java/what-is-hash-code/

Java Virtual Machine, An inside story!!

Java Virtual Machine, or JVM as its name suggests, is a "virtual" computer that resides in the "real" computer as a software process. The JVM gives Java the flexibility of platform independence. Let us first see how exactly a Java program is created, compiled and executed.

Java code is written in a .java file. This code contains one or more Java language constructs like classes, methods, variables, objects etc. javac is used to compile this code and to generate a .class file. The class file is also known as "byte code", a name probably given because of the structure of the instruction set of a Java program. We will see more about the instruction set later in this tutorial. Java byte code is the input to the Java Virtual Machine. The JVM reads this code, interprets it and executes the program.

(Diagram: Java program execution)

Java Virtual Machine

The Java Virtual Machine, like its real counterpart, executes the program and generates output. To execute any code, the JVM utilizes different components.

The JVM is divided into several components: the stack, the garbage-collected heap, the registers and the method area. Let us see a diagram representation of the JVM.

The Stack

The stack in the Java virtual machine stores method arguments as well as the local variables of any method. The stack also keeps track of each and every method invocation; this record is called a stack frame. There are three registers that help in stack manipulation: vars, frame and optop. These registers point to different parts of the current stack.

There are three sections in Java stack frame:

Local Variables

The local variables section contains all the local variables being used by the current method invocation. It is pointed to by the vars register.

Execution Environment

The execution environment section is used to maintain the operations of the stack itself. It is pointed to by the frame register.

Operand Stack

The operand stack is used as a work space by bytecode instructions. It is here that the parameters for bytecode instructions are placed, and results of bytecode instructions are found. The top of the operand stack is pointed to by the optop register.

Method Area

This is the area where bytecodes reside. The program counter points to some byte in the method area. It always keeps track of the current instruction being executed (interpreted). After execution of an instruction, the JVM sets the PC to the next instruction. The method area is shared among all the threads of a process. Hence, if more than one thread is accessing any specific method or instruction, synchronization is needed. Synchronization in the JVM is achieved through monitors.

Garbage-collected Heap

The garbage-collected heap is where the objects in Java programs are stored. Whenever we allocate an object using the new operator, the heap comes into the picture and memory is allocated from there. Unlike C++, Java does not have a free operator to free previously allocated memory. Java does this automatically using the garbage collection mechanism. Up to Java 6.0, a mark-and-sweep algorithm is used as the garbage collection logic. Remember that a local object reference resides on the stack, but the actual object resides only in the heap. Also, arrays in Java are objects, hence they also reside in the garbage-collected heap.
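
A small illustration of that last point; in the sketch below the references live in the current method's stack frame, while the String object and the int[] array live on the garbage-collected heap:

public void example() {
    // 'name' is a local reference stored in this method's stack frame;
    // the String object it refers to is allocated on the heap
    String name = new String("JVM");

    // arrays are objects too, so the int[] lives on the heap,
    // while the 'numbers' reference lives on the stack
    int[] numbers = new int[10];
}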

Source : http://viralpatel.net/blogs/java-virtual-machine-an-inside-story/

Java Heap Space: What is it?


Are you new to Java and Java EE? Looking for a simple high level overview of the Java Heap Space or just to improve your Java Heap knowledge? This post is for you.

Background

When learning Java for the first time, a lot of focus is often spent on the Java language itself, object-oriented programming principles, design patterns, compilation etc., and not so much on the Java VM itself, such as Java Heap memory management, garbage collection and performance tuning, which are often considered "advanced" topics.

A beginner Java or Java EE programmer ends up creating his first program or Web application. Java Heap memory problems are then often observed, such as OutOfMemoryError, which can be quite challenging for Java beginners or even intermediates to troubleshoot.

Sounds familiar?

Java Heap Space – Overview & life cycle

Proper knowledge of the Java VM Heap Space is critical, including for Java beginners, so my recommendation to you is to learn these principles at the same time you learn the Java language technicalities.

Your Java VM is basically the foundation of your Java program; it provides you with dynamic memory management services, garbage collection, Threads, IO, native operations and more.

The Java Heap Space is the memory "container" of your runtime Java program; it provides your Java program with the proper memory spaces it needs (Java Heap, Native Heap) and is managed by the JVM itself.

Your Java program life cycle typically looks like this:
  • Java program coding (via Eclipse IDE etc.) e.g. HelloWorld.java
  • Java program compilation (Java compiler or third party build tools such as Apache Ant, Apache Maven..) e.g. HelloWorld.class
  • Java program start-up and runtime execution e.g. via your HelloWorld.main() method

 

The Java Heap space is mainly applicable and important for the third step: runtime execution. For the HotSpot VM, the Java Heap Space is split into 3 silos:
  • Java Heap for short & long lived objects (YoungGen & OldGen spaces)
  • PermGen space
  • Native Heap
Now let’s dissect your HelloWorld.class program so you can better understand.
  • At start-up, your JVM will load and cache some of your static program and JDK libraries to the Native Heap, including native libraries, Mapped Files such as your program Jar file(s), Threads such as the main start-up Thread of your program etc.
  • Your JVM will then store the “static” data of your HelloWorld.class Java program to the PermGen space (Class metadata, descriptors..)
  • Once your program is started, the JVM will then manage and dynamically allocate the memory of your Java program on the Java Heap (YoungGen & OldGen). This is why it is so important that you understand how much memory your Java program needs, so you can properly fine-tune the capacity of your Java Heap, controlled via the -Xms & -Xmx JVM parameters. Profiling and Heap Dump analysis allow you to determine your Java program's memory footprint
  • Finally, the JVM also has to dynamically release the memory from the Java Heap Space that your program no longer needs; this is called the garbage collection process. This process can be easily monitored via the JVM verbose GC output or a monitoring tool of your choice such as JConsole
Sounds complex? The good news is that JVM maturity has improved significantly over the last 10 years and it provides you with out-of-the-box tools allowing you to understand your Java program's Heap allocation, monitor it and fine-tune it.
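
As a quick illustration, and only a sketch rather than a tuning tool, you can query the Runtime API for the current heap figures; the numbers depend on your -Xms/-Xmx settings and your JVM:

public class HeapInfo {
    public static void main(String[] args) {
        Runtime runtime = Runtime.getRuntime();
        // current heap size, the part of it that is free, and the -Xmx ceiling
        System.out.println("total heap (bytes): " + runtime.totalMemory());
        System.out.println("free heap  (bytes): " + runtime.freeMemory());
        System.out.println("max heap   (bytes): " + runtime.maxMemory());
    }
}
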
Source : http://javaeesupportpatterns.blogspot.in/2012/02/java-heap-space-what-is-it.html

Java : Local variables inside a loop and performance

Overview

Sometimes a question comes up about how much work allocating a new local variable takes.  My feeling has always been that the code becomes optimised to the point where this cost is static i.e. done once, not each time the code is run.

Recently Ishwor Gurung suggested considering moving some local variables outside a loop. I suspected it wouldn’t make a difference but I had never tested to see if this was the case.

The test.

This is the test I ran

public static void main(String... args) {
    for (int i = 0; i < 10; i++) {
        testInsideLoop();
        testOutsideLoop();
    }
}

private static void testInsideLoop() {
    long start = System.nanoTime();
    int[] counters = new int[144];
    int runs = 200 * 1000;
    for (int i = 0; i < runs; i++) {
        int x = i % 12;
        int y = i / 12 % 12;
        int times = x * y;
        counters[times]++;
    }
    long time = System.nanoTime() - start;
    System.out.printf("Inside: Average loop time %.1f ns%n", (double) time / runs);
}

private static void testOutsideLoop() {
    long start = System.nanoTime();
    int[] counters = new int[144];
    int runs = 200 * 1000, x, y, times;
    for (int i = 0; i < runs; i++) {
        x = i % 12;
        y = i / 12 % 12;
        times = x * y;
        counters[times]++;
    }
    long time = System.nanoTime() - start;
    System.out.printf("Outside: Average loop time %.1f ns%n", (double) time / runs);
}

and the output ended with

Inside: Average loop time 3.6 ns
Outside: Average loop time 3.6 ns
Inside: Average loop time 3.6 ns
Outside: Average loop time 3.6 ns

Increasing the time the test takes to 100 million iterations made little difference to the results.

Inside: Average loop time 3.8 ns
Outside: Average loop time 3.8 ns
Inside: Average loop time 3.8 ns
Outside: Average loop time 3.8 ns

Replacing the modulus and multiplication with >>, &, + I got

int x = i & 15;
int y = (i >> 4) & 15;
int times = x + y;

prints

Inside: Average loop time 1.2 ns
Outside: Average loop time 1.2 ns
Inside: Average loop time 1.2 ns
Outside: Average loop time 1.2 ns

While modulus is relatively expensive, the resolution of the test is 0.1 ns, or less than 1/3 of a clock cycle, so any difference between the two tests would be visible at that accuracy.

Using Caliper

As @maaartinus comments, Caliper is a micro-benchmarking library, so I was interested in how much slower it might be than doing the code by hand.

public static void main(String... args) {
    Runner.main(LoopBenchmark.class, args);
}

public static class LoopBenchmark extends SimpleBenchmark {
    public void timeInsideLoop(int reps) {
        int[] counters = new int[144];
        for (int i = 0; i < reps; i++) {
            int x = i % 12;
            int y = i / 12 % 12;
            int times = x * y;
            counters[times]++;
        }
    }

    public void timeOutsideLoop(int reps) {
        int[] counters = new int[144];
        int x, y, times;
        for (int i = 0; i < reps; i++) {
            x = i % 12;
            y = i / 12 % 12;
            times = x * y;
            counters[times]++;
        }
    }
}

The first thing to note is that the code is shorter, as it doesn't include the timing and printing boilerplate code. Running this on the same machine as the first test, I get:

0% Scenario{vm=java, trial=0, benchmark=InsideLoop} 4.23 ns; σ=0.01 ns @ 3 trials
50% Scenario{vm=java, trial=0, benchmark=OutsideLoop} 4.23 ns; σ=0.01 ns @ 3 trials

benchmark   ns linear runtime
InsideLoop 4.23 ==============================
OutsideLoop 4.23 =============================

vm: java
trial: 0

Replacing the modulus with shift and AND gives:

0% Scenario{vm=java, trial=0, benchmark=InsideLoop} 1.27 ns; σ=0.01 ns @ 3 trials
50% Scenario{vm=java, trial=0, benchmark=OutsideLoop} 1.27 ns; σ=0.00 ns @ 3 trials

benchmark   ns linear runtime
InsideLoop 1.27 =============================
OutsideLoop 1.27 ==============================

vm: java
trial: 0

This is consistent with the first result, and only about 0.4 to 0.6 ns slower for the modulus test (about two clock cycles), with next to no difference for the shift, AND, plus test. This may be due to the way Caliper samples the data, but it doesn't change the outcome.

It is worth noting that when running real programs you typically get longer times than in a micro-benchmark, as the program will be doing more things, so the caching and branch prediction are not as ideal. A small overestimate of the time taken may be closer to what you can expect to see in a real program.

Conclusion

This indicated to me that in this case it made no difference. I still suspect the cost of allocating local variables is paid once, when the code is compiled by the JIT, and there is no per-iteration cost to consider.

 

Source: http://vanillajava.blogspot.ru/2012/12/local-variables-inside-loop-and.html

Cache is King by Steve Souders

I previously wrote that the keys to a fast web app are using Ajax, optimizing JavaScript, and better caching.

  • Using Ajax reduces network traffic to just a few JSON requests.
  • Optimizing JavaScript (downloading scripts asynchronously, grouping DOM modifications, yielding to the UI thread, etc.) allows requests to happen in parallel and makes rendering happen sooner.
  • Better caching means many of the web app’s resources are stored locally and don’t require any HTTP requests.

It’s important to understand where the benefits from each technique come into play. Using Ajax, for example, doesn’t make the initial page load time much faster (and often makes it slower if you’re not careful), but subsequent “pages” (user actions) are snappier. Optimizing JavaScript, on the other hand, makes both the first page view and subsequent pages faster. Better caching sits in the middle: The very first visit to a site isn’t faster, but subsequent page views are faster. Also, even after closing their browser the user gets a faster initial page when she returns to your site – so the performance benefit transcends browser sessions.

These web performance optimizations aren’t mutually exclusive – you should do them all! But I (and perhaps you) wonder which has the biggest impact. So I decided to run a test to measure these different factors. I wanted to see the benefits on real websites, so that pointed me to WebPagetest where I could easily do several tests across the Alexa Top 1000 websites. Since there’s no setting to “Ajaxify” a website, I decided instead to focus on time spent on the network. I settled on doing these four tests:

  • Baseline – Run the Alexa Top 1000 through WebPagetest using IE9 and a simulated DSL connection speed (1.5 Mbps down, 384 Kbps up, 50ms RTT). Each URL is loaded three times and the median (based on page load time) is the final result. We only look at the “first view” (empty cache) page load.
  • Fast Network – Same as Baseline except use a simulated FIOS connection: 20 Mbps down, 5 Mbps up, 4ms RTT.
  • No JavaScript – Same as Baseline except use the new “noscript” option to the RESTful API (thanks Pat!). This is the same as choosing the browser option to disable JavaScript. This isn’t a perfect substitute for “optimizing JavaScript” because any HTTP requests that are generated from JavaScript are skipped. On the other hand, any resources inside NOSCRIPT tags are added. We’ll compare the number of HTTP requests later.
  • Primed Cache – Same as Baseline except only look at “repeat view”. This test looks at the best case scenario for the benefits of caching given the caching headers in place today. Since not everything is cacheable, some network traffic still occurs.

Which test is going to produce the fastest page load times? Stop for a minute and write down your guess. I thought about it before I started and it turned out I was wrong.

The Results

This chart shows the median and 95th percentile window.onload time for each test. The Baseline median onload time is 7.65 seconds (95th is 24.88). Each of the optimizations make the pages load significantly faster. Here’s how they compare:

  1. Primed Cache is the fastest test at 3.46 seconds (95th is 12.00).
  2. Fast Network is second fastest at 4.13 seconds (95th is 13.28).
  3. No JavaScript comes in third at 4.74 seconds (95th is 15.76).

I was surprised that No JavaScript wasn’t the fastest. Disabling JavaScript removes the blocking behavior that can occur when downloading resources. Even though scripts in IE9 download in parallel with most other resources (see Browserscope) they cause blocking for font files and other edge cases. More importantly, disabling JavaScript reduces the number of requests from the Baseline of 90 requests down to 59, and total transfer size drops from 927 kB to 577 kB.

The improvement from a Fast Network was more than I expected. The number of requests and total transfer size were the same as the Baseline, but the median onload time was 46% faster, from 7.65 seconds down to 4.13 seconds. This shows that network conditions (connection speed and latency) have a significant impact. (No wonder it’s hard to create a fast mobile experience.)

The key to why Primed Cache won is the drop in requests from 90 to 32. There were 58 requests that were read from cache without generating any HTTP traffic. The HTTP Archive max-age chart for the Alexa Top 1000 shows that 59% of requests are configured to be cached (max-age > 0). Many of these have short cache times of 10 minutes or less, but since the “repeat view” is performed immediately these are read from cache – so it truly is a best case scenario. 59% of 90 is 53. The other 5 requests were likely pulled from cache because of heuristic caching.

Although the results were unexpected, I’m pleased that caching turns out to be such a significant (perhaps the most significant) factor in making websites faster. I recently announced my focus on caching. This is an important start – raising awareness about the opportunity that lies right in front of us.

The number of resource requests was reduced by 64% using today's caching headers, but only because "repeat view" runs immediately. If we had waited one day, 19% of the requests would have expired, generating 17 new If-Modified-Since requests. And it's likely that some of the 5 requests that benefited from heuristic caching would have generated new requests. Instead we need to move in the other direction, making more resources cacheable for longer periods of time. And given the benefits of loading resources quickly, we need to investigate ways to prefetch resources before the browser even asks for them.
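
As a server-side sketch of what making a resource cacheable for longer can look like in Java (the servlet class is hypothetical and the one-year max-age is just an example value):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// hypothetical servlet serving a static, versioned resource
public class CachedResourceServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // allow browsers and proxies to cache the response for up to one year
        response.setHeader("Cache-Control", "public, max-age=31536000");
        // ... write the resource bytes to the response ...
    }
}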

Source :

1. http://www.stevesouders.com/blog/2012/10/11/cache-is-king/

2. http://www.youtube.com/watch?v=HKNZ-tQQnSY

3. http://www.slideshare.net/souders/cache-is-king

 

 

Why String is immutable or final in Java?

Though there could be many possible answers to this question, and only the designers of the String class can answer it for sure, I think the reasons below make sense:

1) Imagine the String pool facility without making String immutable; it is not possible at all, because in the case of the String pool one String object/literal, e.g. "Test", is referenced by many reference variables, so if any one of them changed the value the others would automatically be affected. For example, let's say:

String A = "Test";
String B = "Test";

Now suppose String B called "Test".toUpperCase() and it changed the same object into "TEST"; then A would also be "TEST", which is not desirable.
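
A small sketch of the sharing involved; because String is in fact immutable, toUpperCase() returns a new object instead of changing the pooled one:

String a = "Test";
String b = "Test";

// both literals resolve to the same object in the String pool
System.out.println(a == b);        // prints true

// toUpperCase() cannot modify the shared object; it returns a new String
String upper = b.toUpperCase();
System.out.println(a);             // still prints "Test"
System.out.println(upper);         // prints "TEST"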

2) String is widely used as a parameter for many Java classes, e.g. for opening a network connection you can pass the hostname and port number as a String, you can pass a database URL as a String for opening a database connection, and you can open any file in Java by passing the name of the file as an argument to the File I/O classes.

If String were not immutable, this would lead to a serious security threat: someone could change the name of a file for which he has authorization, either deliberately or accidentally, and gain access to other files. This is also sometimes asked in interviews as "Why is a char array better than String for storing passwords in Java?"

3) Since String is immutable it can safely be shared between many threads, which is very important for multi-threaded programming and for avoiding synchronization issues in Java. Immutability also makes String instances thread-safe in Java, meaning you don't need to synchronize String operations externally. Another important point to note about String is the memory leak caused by substring(), which is not a thread-related issue but something to be aware of.

4) Another reason why String is immutable in Java is to allow String to cache its hashcode. Being immutable, String in Java caches its hashcode and does not calculate it every time we call the hashCode method, which makes it very fast to use as a HashMap key in Java. This one was also suggested by Jaroslav Sedlacek in the comments below. In short, because String is immutable, no one can change its contents once created, which guarantees that the hashCode of a String is the same on multiple invocations.
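
Here is a rough sketch of the caching idea; it is not the real String source, just an illustration of lazily caching the hash of an immutable value (the ImmutableKey class is made up):

public final class ImmutableKey {
    private final String value;
    private int hash;                // cached hash code; 0 means "not computed yet"

    public ImmutableKey(String value) {
        this.value = value;
    }

    public int hashCode() {
        int h = hash;
        if (h == 0) {                // compute at most once; safe because 'value' never changes
            h = value.hashCode();
            hash = h;
        }
        return h;
    }
}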

5) Another good reason why String is immutable in Java, suggested by Dan Bergh Johnsson in the comments, is: the absolutely most important reason that String is immutable is that it is used by the class loading mechanism, and thus has profound and fundamental security aspects.
Had String been mutable, a request to load "java.io.Writer" could have been changed to load "mil.vogoon.DiskErasingWriter".

I believe there could be some more very convincing reasons as well. Please post those reasons as comments and I will include them in this post.

I think the above reasons also hold for another Java interview question, "Why is String final in Java?": to be immutable you have to be final, so that a subclass doesn't break immutability. What do you guys think?

Source : http://javarevisited.blogspot.com/2010/10/why-string-is-immutable-in-java.html#ixzz2IlmuOgOu

 

 

Geek Time with Josh Bloch -Chief Java Architect at Google


In addition to being known as “The Mother of Java,” Josh Bloch is also the Chief Java Architect at Google. Josh sat down with fellow Open Source Programs Office colleague Jeremy Allison to chat about APIs, the importance of openness, and the successes and failures of Java.

Some highlights from their conversation:

(0:45) Josh describes what he does for Java and at Google.

(1:59) Jeremy expresses his disappointments with Java, based on the early potential that it showed. Josh responds with an explanation of some of the difficulties that Java faced.

(4:48) Josh and Jeremy talk about some of the factors that contributed to Java’s successes.

(9:51) Josh explains his philosophy towards creating APIs.

(14:30) Josh talks about the APIs that he’s most proud of.

(19:45) Josh and Jeremy discuss the importance of reading other people’s code, and the impact of Sun’s decision to put the code and bug database for Java on the web.

(24:00) Josh explains how he came to be in his current position and gives advice for others who are looking for ways to get started programming.

(27:32) Josh wrote the standard Java best-practices guide, Effective Java, and co-authored two other Java books: Java Puzzlers, and Java Concurrency in Practice. As a special treat for this blog’s readers, Josh is offering signed copies of his books for the first two people with correct responses to the following puzzle. Submit your answer as a comment on this post, then use the same Blogger ID to leave a separate comment with your contact info and inscription request (for your privacy, the comment with your contact info will not be published).

Source : http://google-opensource.blogspot.in/2011/04/geek-time-with-josh-bloch.html

Selecting your Collections library

Is this really something you should bother with? Is there something fundamentally wrong with java.util.ArrayList and java.util.HashMap? For most of the source code out there the answer is no; those implementations are perfectly OK. But as always, the devil is in the details. And there are situations when either the feature set built into the Collections API is not sufficient or you find the overhead of the standard collections way too high for your liking.

In the past years we have continuously stumbled upon the very same problems, and thought it would be nice to share the experience; hopefully it can save somebody a day or two, either by avoiding yet another bidirectional Map implementation or by understanding why his HashSet consumes 10x more memory than intended. We have divided the review into two different groups of Collection libraries. The first of them provides additional features to the standard Collections API. In this group we have players such as Guava and Apache Commons Collections. Another set of Collection libraries works on some aspect of performance. In this group we see libraries like Trove, fastutil and Huge Collections. We start our overview with the feature-adding libraries and then move to the performance-oriented landscape.

Read more at: http://plumbr.eu/blog/selecting-your-collections-library