vert.x - Putting 2 million objects into a Hazelcast cluster of 5 nodes - what optimizations should we make?


We have 2 million objects coming in through a REST API in chunks of 500 objects per API call (approximately 750 MB of data in total).

  • When we put these objects into the Hazelcast cache as follows, it takes around 10 minutes, with CPU at 5-6% - which makes sense, because there are 2 million blocking network calls (see sketch 1 after this list).

    vertx.executeBlocking {
        for (2 million times) {
            hazelcast.put(mapName, key, value)
        }
    }
  • When we don’t use Vert.x’s “executeBlocking” and instead run the following, the whole process finishes in 10-15 seconds, but CPU reaches 80%. Using Hazelcast Mancenter, we can see all 2 million objects reflected in the cache within those 10-15 seconds (sketch 2 below).

    for (2 million times) {
        hazelcast.putAsync(mapName, key, value)
    }
  • When we use #putAll as follows, CPU reaches 60%, which is better than the second approach. This approach finishes in approximately 10 seconds (sketch 3 below).

    for (2 million objects in chunks of 500) {
        hazelcast.putAll(mapName, collection-of-500-objects)
    }
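Sketch 1 - a minimal runnable version of the blocking approach. This assumes Vert.x 4 and Hazelcast 4.x; the map name "objects" and the `entries` source are placeholders, not from the original post:

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap; // com.hazelcast.core.IMap in Hazelcast 3.x
    import io.vertx.core.Vertx;
    import java.util.Map;

    public class BlockingLoad {
        // One blocking put per entry, moved off the event loop via
        // executeBlocking. Every put() is a network round-trip, so the worker
        // thread spends almost all of its time waiting - hence 5-6% CPU.
        static void load(Vertx vertx, HazelcastInstance hazelcast,
                         Map<String, byte[]> entries) {
            IMap<String, byte[]> map = hazelcast.getMap("objects");
            vertx.executeBlocking(promise -> {
                for (Map.Entry<String, byte[]> e : entries.entrySet()) {
                    map.put(e.getKey(), e.getValue()); // blocks per entry
                }
                promise.complete();
            }, res -> System.out.println("bulk load finished"));
        }
    }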
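Sketch 2 - the fire-and-forget async variant, assuming Hazelcast 4.x, where putAsync returns a CompletionStage (3.x returns an ICompletableFuture). The CountDownLatch is an addition so the caller knows when every put has been acknowledged:

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;
    import java.util.Map;
    import java.util.concurrent.CountDownLatch;

    public class AsyncLoad {
        // All 2 million puts are issued without waiting for acknowledgements,
        // so serialization and I/O happen as fast as the client can push
        // them - which is consistent with CPU jumping to ~80%.
        static void load(HazelcastInstance hazelcast,
                         Map<String, byte[]> entries) throws InterruptedException {
            IMap<String, byte[]> map = hazelcast.getMap("objects");
            CountDownLatch done = new CountDownLatch(entries.size());
            for (Map.Entry<String, byte[]> e : entries.entrySet()) {
                map.putAsync(e.getKey(), e.getValue())
                   .whenComplete((previous, error) -> done.countDown());
            }
            done.await(); // all puts acknowledged by the cluster
        }
    }

Note that millions of in-flight invocations can run into the Hazelcast client's back-pressure limits; capping the number of outstanding calls (for example with a Semaphore) would bound memory use.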
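Sketch 3 - the batched putAll variant, grouping entries into 500-entry maps so 2 million entries become roughly 4K network calls:

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;
    import java.util.HashMap;
    import java.util.Map;

    public class BatchedLoad {
        private static final int CHUNK_SIZE = 500;

        static void load(HazelcastInstance hazelcast,
                         Map<String, byte[]> entries) {
            IMap<String, byte[]> map = hazelcast.getMap("objects");
            Map<String, byte[]> chunk = new HashMap<>();
            for (Map.Entry<String, byte[]> e : entries.entrySet()) {
                chunk.put(e.getKey(), e.getValue());
                if (chunk.size() == CHUNK_SIZE) {
                    map.putAll(chunk); // one batched call per 500 entries
                    chunk.clear();
                }
            }
            if (!chunk.isEmpty()) {
                map.putAll(chunk);     // flush the final partial chunk
            }
        }
    }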

Are there any optimizations you guys would recommend? I wonder why Hazelcast is spiking the CPU so much.

FYI - I think vertx.executeBlocking executes the piece of code asynchronously on a worker thread. We are using an Intel Xeon 8-core CPU with 12 GB RAM.

Have a look at IMap#putAll. It allows putting data in chunks. As you said, 2 mln divided into chunks of 500 objects = 4K chunks. Its Javadoc:

    /**
     * {@inheritDoc}
     * <p>
     *     No atomicity guarantees are given. In the case of a failure, some
     *     of the key/value-pairs may get written, while others are not.
     * </p>
     * <p>
     *     <b>Warning:</b>
     *     If you have set a TTL for the key, the TTL remains unchanged and the
     *     entry will expire when the initial TTL has elapsed.
     * </p>
     */
    void putAll(Map<? extends K, ? extends V> m);
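One hypothetical way to combine this with the Vert.x setup from the question: dispatch each 500-entry chunk's putAll to the worker pool with ordered = false, so batching and the 8 cores are used together. The names and the chunking loop are illustrative, not from the answer:

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;
    import io.vertx.core.CompositeFuture;
    import io.vertx.core.Future;
    import io.vertx.core.Vertx;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ParallelBatchedLoad {
        private static final int CHUNK_SIZE = 500;

        static Future<Void> load(Vertx vertx, HazelcastInstance hazelcast,
                                 Map<String, byte[]> entries) {
            IMap<String, byte[]> map = hazelcast.getMap("objects");
            List<Future> batches = new ArrayList<>();
            Map<String, byte[]> chunk = new HashMap<>();
            for (Map.Entry<String, byte[]> e : entries.entrySet()) {
                chunk.put(e.getKey(), e.getValue());
                if (chunk.size() == CHUNK_SIZE) {
                    Map<String, byte[]> batch = chunk; // capture for the lambda
                    chunk = new HashMap<>();
                    batches.add(vertx.executeBlocking(p -> {
                        map.putAll(batch); // one batched call per worker task
                        p.complete();
                    }, false));            // unordered: chunks run in parallel
                }
            }
            if (!chunk.isEmpty()) {
                Map<String, byte[]> last = chunk; // flush the partial chunk
                batches.add(vertx.executeBlocking(p -> {
                    map.putAll(last);
                    p.complete();
                }, false));
            }
            return CompositeFuture.all(batches).mapEmpty();
        }
    }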
