WSO2 recently performed a comparison study across various metrics, and WSO2 ESB performed well on all of the criteria, showing extraordinary speed against a number of leading open source ESBs. Refer to the WSO2 article on WSO2 ESB Performance Round 7.5 for more information.
The comparison was made with WSO2 ESB 4.8.1 against Mule ESB 3.4.0, Talend ESB 5.3.1 and UltraESB 2.0.0. The final observations are shown below.
These results clearly show that WSO2 ESB performs far better than the other leading open source ESBs.
Friday, February 28, 2014
WSO2 Con Asia 2014 in Colombo!
After successful worldwide conferences in London and San Francisco, WSO2 brings the same great talks back to Colombo with WSO2Con Asia 2014! This is the fifth user conference of WSO2 and it will be held March 24-26, 2014 at Waters Edge in Battaramulla, Sri Lanka. WSO2Con Asia 2014 is one of three regional WSO2 user conferences being presented in 2014. The others are WSO2Con Europe 2014, running June 16-18 in Barcelona, Spain, and WSO2Con USA 2014, scheduled for October 27-29 in San Francisco, California. To learn more about all three events, visit http://wso2con.com.
WSO2 yesterday announced the keynote speakers for WSO2Con Asia 2014, including executives from Dialog Axiata PLC/Axiata Group Berhad, Commonwealth Bank, and WSO2. Collectively, these featured speakers will share their vision for the connected business, as well as introduce case studies, concepts and technologies for achieving successful real-world deployments. More information about the speakers can be found on the official WSO2Con Asia 2014 speakers' page.
WSO2Con Asia is a two-day conference, with additional pre-conference tutorial sessions on building enterprise apps, building Jaggery.js mobile and web apps, a comprehensive cloud strategy for the enterprise, API management, big data with WSO2 CEP and WSO2 BAM, security, SOA-focused enterprise architecture, and more. Throughout the two days of the conference, many interesting topics will be discussed, including how cloud, mobile, APIs, social media and open source are enabling enterprises to create new connections across employees, customers and partners. At the end of each day there will also be a panel discussion covering various perspectives on IT in the current trends.
WSO2Con Asia 2014 is focused on empowering enterprises with the technology and implementation insights to succeed in the era of the connected business. Keynote presenters will explore the emerging technologies that are enabling the connected business, as well as case studies and key concepts around the practical application of these technologies to accelerate development and build a lean and agile connected business environment. The full agenda can be viewed at http://asia14.wso2con.com/agenda.
WSO2 provides a complete open source middleware platform to build, integrate and manage your enterprise APIs, applications, and Web services on-premises, in the cloud, and on mobile devices. WSO2Con Asia 2014 is a great opportunity to learn how connected businesses can grow revenue and outperform peers by increasing customer engagement, enhancing productivity, and seizing market opportunities. Have fun at WSO2Con Asia 2014, and expand your knowledge!
Monday, March 4, 2013
Broken pipe exception when connecting to Cassandra
Recently I needed to run the WSO2 BAM receiver under high load, and during that I experienced the exception below.
[2013-03-04 15:56:43,010] ERROR {me.prettyprint.cassandra.connection.client.HThriftClient} - Could not flush transport (to be expected if the pool is shutting down) in close for client: CassandraClient<cassandra03:9170-1581>
org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156)
at me.prettyprint.cassandra.connection.client.HThriftClient.close(HThriftClient.java:98)
at me.prettyprint.cassandra.connection.client.HThriftClient.close(HThriftClient.java:26)
at me.prettyprint.cassandra.connection.HConnectionManager.closeClient(HConnectionManager.java:323)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:272)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
at org.wso2.carbon.databridge.persistence.cassandra.datastore.CassandraConnector.commit(CassandraConnector.java:177)
at org.wso2.carbon.databridge.persistence.cassandra.datastore.CassandraConnector.insertEventList(CassandraConnector.java:402)
at org.wso2.carbon.databridge.datasink.cassandra.subscriber.BAMEventSubscriber.receive(BAMEventSubscriber.java:50)
at org.wso2.carbon.databridge.core.internal.queue.QueueWorker.run(QueueWorker.java:80)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
... 17 more
[2013-03-04 15:56:43,011] ERROR {me.prettyprint.cassandra.connection.HConnectionManager} - MARK HOST AS DOWN TRIGGERED for host cassandra03(10.157.4.137):9170
[2013-03-04 15:56:43,011] ERROR {me.prettyprint.cassandra.connection.HConnectionManager} - Pool state on shutdown: <ConcurrentCassandraClientPoolByHost>:
After doing some more research on the cassandra.yaml configuration, I found that the problem relates to the following properties:
thrift_framed_transport_size_in_mb: 15
thrift_max_message_length_in_mb: 16
Increasing these parameters solves the problem; for example, you can increase 'thrift_max_message_length_in_mb' to 64 and 'thrift_framed_transport_size_in_mb' to 60, which gets rid of the exception above.
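For reference, the updated lines in cassandra.yaml would then look like this (these values are just a starting point; pick limits comfortably larger than your biggest batched event payload, and restart the Cassandra node(s) for the change to take effect):
thrift_framed_transport_size_in_mb: 60
thrift_max_message_length_in_mb: 64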
Tuesday, February 26, 2013
How to configure a MySQL server in Linux and connect to it from a remote machine?
I recently needed to start a MySQL server and connect to it remotely from another client machine. It wasn't as easy as I expected, and I came across a couple of issues along the way, so I thought of blogging it as it may be useful for someone else too. :)
These are the steps I followed:
- Installed the MySQL server.
- Connected to the MySQL server locally with the mysql client.
- Ran ifconfig on my machine to find out its IP address.
- Then used that IP address to connect to the MySQL server remotely (typical commands are sketched after this list).
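The exact commands I used aren't shown above, but as a rough sketch they would look something like this (x.x.x.x is a placeholder for the server's IP address):
- Connect locally on the server machine ---> mysql -u root -p
- Connect remotely from the client machine ---> mysql -u root -p -h x.x.x.x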
Then I got the below error.
ERROR 2003 (HY000): Can't connect to MySQL server on 'x.x.x.x' (111)
- The above error occurs because of the bind-address of the MySQL server. The bind-address is set in the my.cnf file, and when the MySQL server starts up it binds to that address. By default the address is 127.0.0.1, which is the loopback address, so we can't connect via this address from another machine remotely. The address you specify in bind-address tells MySQL where to listen; 0.0.0.0 is a special address, which means "bind to every available network interface" (a sketch of the change is shown below).
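As a minimal sketch, assuming the Ubuntu default config location /etc/mysql/my.cnf (the path can vary), the change looks like this:
[mysqld]
bind-address = 0.0.0.0
After editing my.cnf, restart the server (sudo service mysql restart) so the new bind-address takes effect.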
- Then, when I tried to connect remotely again using the same command, I encountered another error.
- After a bit of research, the fix in my case was to GRANT the root user permission to connect to MySQL from any host. By default, the user "root" was only allowed to connect from the localhost and 127.0.0.1 hosts of MySQL.
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '<roots-password>' WITH GRANT OPTION;
After all of the above configuration, I could connect to the MySQL server remotely. :-)
Some useful tips:
- Start the MySQL server ---> sudo service mysql start
- Stop the MySQL server ---> sudo service mysql stop
Thursday, February 14, 2013
Load Balancing Data Publishers and Sending Events to Multiple Receivers
WSO2 BAM/CEP has a high-performance Thrift-based event receiving model, which basically receives the events via TCP. Even though Thrift is a high-performance receiving protocol, load balancing Thrift events is problematic, as you need a TCP-based load balancer rather than an HTTP-based one. Therefore in WSO2 we have added support for load balancing between data bridge receivers (i.e., WSO2 BAM and CEP servers) from the client side, by sending the events in a round-robin manner to the servers, so that the event load is balanced between them.
For this we have added a wrapper class called LoadBalancingDataPublisher, which uses the AsyncDataPublisher internally. It not only load balances events across a set of servers, but can also send the same events to several servers. All the capabilities of the LoadBalancingDataPublisher are described in the BAM 2.2.0 documentation here, which explains its use cases.
It also provides failover handling together with load balancing: it can detect a node failure and stop further publishing to the dead node, and it recognizes when that node comes back up and resumes load balancing events to it from that point.
I'll provide a more detailed description of using the LoadBalancingDataPublisher to publish events to BAM/CEP in the next article.
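In the meantime, here is a rough sketch of what the client side can look like, based on the thrift agent API as I recall it from BAM 2.2.0. The receiver URLs, credentials, the stream definition and the exact class/method signatures (ReceiverGroup, DataPublisherHolder, addStreamDefinition, publish) are assumptions here, so please verify them against the documentation linked above.

import java.util.ArrayList;
import org.wso2.carbon.databridge.agent.thrift.lb.DataPublisherHolder;
import org.wso2.carbon.databridge.agent.thrift.lb.LoadBalancingDataPublisher;
import org.wso2.carbon.databridge.agent.thrift.lb.ReceiverGroup;

public class LoadBalancedPublisherSample {
    public static void main(String[] args) throws Exception {
        // One receiver group: events are load balanced (round robin) across its nodes.
        // Adding a second ReceiverGroup would send the same events to that group as well.
        ArrayList<DataPublisherHolder> nodes = new ArrayList<DataPublisherHolder>();
        nodes.add(new DataPublisherHolder(null, "tcp://bam-node1:7611", "admin", "admin"));
        nodes.add(new DataPublisherHolder(null, "tcp://bam-node2:7611", "admin", "admin"));

        ArrayList<ReceiverGroup> receiverGroups = new ArrayList<ReceiverGroup>();
        receiverGroups.add(new ReceiverGroup(nodes));

        LoadBalancingDataPublisher publisher = new LoadBalancingDataPublisher(receiverGroups);

        // A simple stream definition with one meta attribute and one payload attribute.
        String streamDefinition = "{'name':'org.example.sample','version':'1.0.0'," +
                "'metaData':[{'name':'clientType','type':'STRING'}]," +
                "'payloadData':[{'name':'requestCount','type':'INT'}]}";
        publisher.addStreamDefinition(streamDefinition, "org.example.sample", "1.0.0");

        // Events published now are distributed across bam-node1 and bam-node2.
        publisher.publish("org.example.sample", "1.0.0",
                new Object[]{"external"}, null, new Object[]{100});
    }
}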
Wednesday, December 12, 2012
Non-blocking data publishing for BAM/CEP
You can publish events to BAM/CEP by using the DataPublisher. The AsyncDataPublisher is an enhanced version of the DataPublisher which incorporates all the constructors and the API of the general DataPublisher, and sends events asynchronously. That is, with the general data publisher, making the connection is synchronous/blocking, so network latency might affect the connection time and publishing efficiency. The AsyncDataPublisher connects to the receiver asynchronously, and caches and re-uses the stream ids in an efficient manner.
There are mainly three steps involved in using the AsyncDataPublisher.
1. Create an AsyncDataPublisher instance with any of the available constructors.
Eg: AsyncDataPublisher asyncDataPublisher = new AsyncDataPublisher("url", "userName", "password");
2. Add the stream definition JSON string that you would like to publish via the AsyncDataPublisher. You can add any number of stream definitions, and you can add them at any time, but make sure that before you publish an event for a specific stream definition, you have added it to the AsyncDataPublisher.
Eg: asyncDataPublisher.addStreamDefinition("stream def");
3. Publish the events for a stream that has already been added.
Eg: asyncDataPublisher.publish("stream name", "stream version", metaDataObjectArray, correlationDataObjectArray, payLoadDataObjectArray);
This will send the events with the additional advantages mentioned above.
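As a quick end-to-end sketch of the three steps above (not an official sample): the receiver URL, credentials and stream definition below are illustrative assumptions, and the package location and exact addStreamDefinition/publish signatures may differ slightly between agent versions, so treat it as a template rather than copy-paste code.

import org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher;

public class AsyncPublisherSample {
    public static void main(String[] args) throws Exception {
        // 1. Create the publisher; it connects to the receiver asynchronously.
        AsyncDataPublisher asyncDataPublisher =
                new AsyncDataPublisher("tcp://localhost:7611", "admin", "admin");

        // 2. Register the stream definition (JSON) before publishing events for it.
        //    Some agent versions also take the stream name and version as extra arguments here.
        String streamDefinition = "{'name':'org.example.sample','version':'1.0.0'," +
                "'metaData':[{'name':'clientType','type':'STRING'}]," +
                "'payloadData':[{'name':'requestCount','type':'INT'}]}";
        asyncDataPublisher.addStreamDefinition(streamDefinition);

        // 3. Publish an event; the meta/correlation/payload arrays must match the definition.
        asyncDataPublisher.publish("org.example.sample", "1.0.0",
                new Object[]{"external"}, null, new Object[]{1});
    }
}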
Friday, October 26, 2012
How to install the Nvidia driver in Ubuntu on a Lenovo T520?
Recently I had to move my hard disk to a new Lenovo machine. I'm using Ubuntu, and everything worked well after I moved the hard disk, but I had a problem with my screen resolution: it was set to 4:3 and that was the maximum resolution available. It was really hard for me to work with such a low resolution.
I found the reason was that the Nvidia display driver was not detected correctly, so I installed the Nvidia driver again on my Ubuntu machine.
The following are the steps I followed to install the Nvidia driver on my machine.
1. sudo add-apt-repository ppa:ubuntu-x-swat/x-updates
2. sudo apt-get update
3. sudo apt-get install nvidia-current
After installing the driver, I restarted the machine and changed the resolution (Applications -> System Tools -> System Settings -> Displays) to 16:9, which solved the resolution problem :-)