Wednesday, October 25, 2006

Integration across the board

So I've come to a point where I have Windows services reading from ActiveMQ (AMQ for short, a piece of message-oriented middleware) and feeding a system via a COM connector. The COM connector seems to be holding up really well. Last night I was tossing messages at AMQ at a rate of 10 per second, each about 150-200 characters in size. I pushed 360010 messages through, and they finally ended up in a database. This morning I checked and all 360010 messages were in the table! The software I use to track the status of AMQ also showed that 360010 messages had been put on the queue, and the same number read off. So things are looking up.

I used Ruby to send the messages to AMQ. I think I have mentioned before that there is a simple messaging protocol called STOMP, which I used to connect to AMQ from Ruby. AMQ has a snappy transport connector which allows you to do this. AMQ also lets you write your own transport connector, so if you don't like a protocol, write your own :)
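For the curious, the producer boils down to something like this. This is a minimal sketch: the broker address, queue name and payload are made up for the example, 61613 is just AMQ's default STOMP port, and the client API may differ slightly from the version you have.

require 'rubygems'
require 'stomp'

# Sketch of the producer script (placeholder host, queue and payload).
conn = Stomp::Connection.open('', '', 'localhost', 61613)

10.times do |i|
  # the real payloads run about 150-200 characters, one every tenth of a second
  conn.send('/queue/magic.orders', "<order id='#{i}'>...</order>")
  sleep 0.1
end

conn.disconnect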

I'm testing changes I made to the STOMP Ruby client. I made it much easier to use the client to send messages that get marshalled by the AMQ STOMP transport connector. The client as it came did not make it possible to send what would be a JMS TextMessage in AMQ. Messages sent with the stock client arrived in AMQ marshalled as the JMS BytesMessage type. The AMQ transport can marshal to either BytesMessage or TextMessage, depending on the STOMP headers. With some simple Ruby coding I now have a modified client with an extra method, 'sendTextMessage'. When using this method the message in AMQ will be marshalled as a TextMessage! TextMessage fits our current purposes better than BytesMessage.
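Boiled down, the mechanics look something like this. This is a simplified sketch of the idea rather than the actual client code, and the helper names are mine: AMQ's STOMP transport marshals a frame that carries a content-length header into a BytesMessage, and a frame without one into a TextMessage, so the text variant simply leaves that header off.

# Simplified sketch (not the real client): the only difference between the
# two sends is whether a content-length header is written into the frame.
def send_frame(socket, destination, body, headers = {})
  frame = "SEND\ndestination:#{destination}\n"
  headers.each { |name, value| frame << "#{name}:#{value}\n" }
  frame << "\n#{body}\0"          # blank line, body, null terminator
  socket.write(frame)
end

def send_bytes_message(socket, destination, body)
  # content-length present => marshalled as a JMS BytesMessage in AMQ
  send_frame(socket, destination, body, 'content-length' => body.length)
end

def send_text_message(socket, destination, body)
  # no content-length => marshalled as a JMS TextMessage in AMQ
  send_frame(socket, destination, body)
end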

So back to the testing. I have 4 producers spitting 20 messages a second onto a queue, and one consumer reading messages off the queue as fast as it can. I'm up near 200k messages sent/received and counting. I'll see if things break at some point :) I also have the Windows service going on and off line to take messages from this queue, which tests how well things are handled when consumers drop out and come back on the fly.
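The consumer side of the test harness is about as simple. Again, this is just a sketch with placeholder names; the real service does more bookkeeping.

require 'rubygems'
require 'stomp'

# Sketch of the throughput-test consumer: drain the queue as fast as the
# client hands messages over, and keep a running count.
conn = Stomp::Connection.open('', '', 'localhost', 61613)
conn.subscribe('/queue/magic.orders')

count = 0
loop do
  conn.receive            # blocks until the next message arrives
  count += 1
  puts "#{count} messages consumed" if count % 10_000 == 0
end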

So the big picture is to be able to easily integrate a M$ product into an ESB using the AMQ messaging system. Rather than having to handle the problem of the M$ product dying and data being dropped, we will have a setup where messages are kept and fed through when the product comes back online. We should also have an easy time reusing business logic components in the ESB.

Saturday, October 21, 2006

More messaging fun!

In my last post I talked about using a STOMP client for Ruby. STOMP is a very lightweight protocol for doing messaging. I have implemented an ESB (Servicemix) and messaging system (ActiveMQ) for my company. Our business depends on many programming languages, databases, and pieces of software to keep things running. STOMP, as I explained before, allows the web guys at my work to message our system using Perl, Ruby, Python, C# and more :). This will allow for easy routing of web data through our ESB.

The goal for now is to set up orders to flow through the ESB for our web department. We are using this new piece of software to do all our CRM, product maintenance and other business functions. I have referred to this software as 'Magic' and will continue to here. Orders will need to flow into Magic somehow. Magic is M$-built, and I figured why not put the ActiveMQ NMS implementation to use. NMS is like a .Net version of JMS. The guys over on the ActiveMQ project created an implementation of NMS for ActiveMQ, which is very nice of them. I can now use either a STOMP client or an NMS client to message with ActiveMQ from my C# code!

So at first I tried to create .dll libraries of the AMQ NMS implementation, then load those into Magic. Magic has CLR functionality in its newest release. This means that the language you code with inside Magic can use objects written in C#, VB.net, Java, and so on... The problem is that the CLR implementation in Magic is lame! It does not support language features like events, delegates, virtual functions, etc. So I cannot use the NMS code inside of Magic. Bummer. I mulled over this one for a day while working on other stuff. Then I hit on an idea.

Magic has a COM connector, which allows you to call pretty much any code written in Magic. This is nice for executing remote procedure calls to Magic. So my new solution is to write Windows services which do the messaging, then push/pull data to and from Magic using the COM connector! I have already written my first service. It writes a simple XML doc to a queue on a separate machine every 30 seconds or so.

Now that I have this written I will be focusing on writing the COM connector code to push and pull messages from Magic. Pushing is the easy part, since it stays asynchronous: as a message comes in, the service grabs it and pushes it over to Magic. Pulling data out may be harder. Right now I use a timer object in my service to execute some functions every 30 seconds. I figure I may have to do this kind of polling to pull data. It sucks, but it is better than the alternative, and in the long run it allows finer tuning of how data flows across our system.

Right now we are getting data out of Magic using web services. I'm not impressed with web services; the only time people ask me to write them is for jobs they are poorly suited to. When we need to get data out of Magic, we run batch processes which send the data to an external web service. Magic spams out large chunks of data to the web service, stressing it badly, while the rest of the time the web service sits there doing nutin :) It's not a very pretty picture. With messaging we can streamline the flow. Messages flow out as they are created, not at batch time. The message consumer can pull messages at its own pace, rather than having to deal with a large volume being spammed at it. This relieves stress across the system. Very nice. I think my boss is convinced that this is the way we should go. I just need to get the many pieces of this together so that we can do some testing.

Monday, October 16, 2006

STOMP

I started something in motion today at work. A thing that may revolutionize inter-departmental communication. I am urging the Perl/Ruby hackers at my work to get involved with my Smix/AMQ habit. They will be doing so using the Perl or Ruby client for STOMP. STOMP is a very simple protocol for doing messaging. When I say simple I mean it. Look at the code for the Ruby client. AMQ, my favorite implementation of the Java Message Service (JMS), offers an adapter so that you can handle STOMP messaging clients. SWEET!!!

So what does that mean for me? Well, those web guys are Perl/Ruby/PHP masters but don't like the idea of getting their feet wet with Java. I wrote up some super simple Ruby scripts for doing STOMP messaging with AMQ. One pair of scripts writes and reads messages to/from a topic; the other pair writes and reads messages to/from a queue. Publishing and consuming of messages happened very quickly. I'm hoping we can get some measurements soon.
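To give a flavor of how little code this takes, here is roughly what the topic pair looks like. This is a sketch: the script names, topic name and broker address are made up, the queue pair is identical apart from the /queue/ prefix, and the client API may differ by version.

# write_topic.rb -- publish one message to a topic (sketch)
require 'rubygems'
require 'stomp'

conn = Stomp::Connection.open('', '', 'localhost', 61613)
conn.send('/topic/web.demo', 'hello from the web team')
conn.disconnect

# read_topic.rb -- subscribe to the same topic and print whatever arrives (sketch)
require 'rubygems'
require 'stomp'

conn = Stomp::Connection.open('', '', 'localhost', 61613)
conn.subscribe('/topic/web.demo')
loop do
  msg = conn.receive
  puts msg.body
end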

I showcased this stuff to one of the top web programmer guys. I don't know if he was wowed, but he seems excited to start playing with it. I also got him to install the Java 1.5 SDK on his Windows machine. He needs it so that he can use jconsole to monitor messaging in AMQ. Jconsole is bundled with the SDK and is used to access JMX-enabled apps (JMX stands for Java Management Extensions). It's pretty nice to see exactly what's going on with your messaging service.

I'm pretty excited. This should make working with the web team much easier. In the past my department has used flat files and web services to communicate. This will add an extra way for our processes to communicate.

Sunday, October 08, 2006

And the award goes to!

WARNING: extreme rant dead ahead!

I'm about to be totally slammed. By slammed I mean that I will have no social life and spend all my time working on a huge project. My predicament stems from poor planning from the top down. My company has decided to build a brand new warehouse, running it on completely new software, which for our purposes we will call 'LowSquat'. Oh yeah, and they have also brought in a completely new piece of software to run all the business functions such as customer support, order entry, and product data management. Let's call this software 'Magic'!

Magic is not written yet. It's a piece of software which you build 'vertically' to suit your company. Magic's manufacturer licenses business partners to build these 'vertical solutions' (tailored software for client companies). Magic has some great features. It allows us developers to make changes, or create whole new applications that run in its environment. That's great and all, but out of the box Magic doesn't do what we need it to do. It comes with a bunch of basic, 'vanilla' applications (like customer service, order entry, and financials). It will be our developers' job to extend these apps so that they meet our business needs. Fun... So we need to work with this partner to develop Magic's apps/functions, then integrate with the warehouse software (LowSquat), inventory planning software, etc. And we have to make sure all of this happens right on time, to a date that exists only because of the construction schedule of the warehouse!

I wouldn't be so bent out of shape if I trusted the management at my work. Unfortunately they have shown that they do not want to extend themselves. So far I have seen a lot of half measures: small changes in the way we do things that bring little benefit, or that are all talk. One huge obstacle to the project moving forward is that they expect us to already know how the vanilla apps work. Our management's expectations (which do not exist on paper) are going to kill this project. No one knows exactly how all the vanilla apps work. We also do not have the test data set up properly to 'play' with Magic's vanilla apps. So how do we find out how the unmodified Magic software works? It would be obvious to any project manager that to gather requirements for modifying Magic, you would first need to be very comfortable (dare I say an expert) with Magic.

The software will also be different when we go live. We are using Magic 2.5, but when we go live we will be using a whole new version, 3.1!!! 3.1 is completely different in some respects. So why is that a problem? We won't have 3.1 until a month before the project is due for completion. Yikes.

A little background on the project so far. We first worked on a small 'sample' phase of the project. It was one piece of many, and it took damn near 5 months (for me) to complete. The planning for that phase did nothing but cause problems. The milestones/deliverables had dates that were not based on any useful information. No one in management bothered to write a full project plan detailing each feature, how it should work, how all the features integrate, how to test each one and know it works, et cetera. The plan was actually just put together by someone with *some* technical experience, with dates arranged so that we would hit a certain launch date. That launch date came from above, I think. It was bad... one day during a meeting I was handed the timetable for completing the tasks assigned to me. Laughably, we spent hours going over this completely groundless plan. I could have used that time to do some real work. I don't know how the management felt about the meetings, but they left me frustrated, and I wonder what they got from them.

Another big miss on the 'sample' phase was not communicating across departments. People in departments which should have been involved never knew about our project. We were unaware of how our project would affect other people's jobs. What happened was that I was knee-deep in development when the word finally got out about what was going to happen. People were unhappy. Last minute, when the project should have been sailing to completion, a bunch of new 'critical' requirements came up. Since time was so short we had to hack together whatever we could. There was no way that we were going to postpone the project launch.

Another real problem I have is that no contact list has been provided. We should have a list of all the 'power users' or 'insanely knowledgeable' contacts for each facet of our business. That way I don't have to chase around to find the expert; I can quickly contact a person and get the info I need. We should also have proper training, and the facilities to get the training we need. I should not have to wait half a day or more for the email describing what I need.

I guess the above comes down to this point: if you don't bother to do a good job, don't expect me to do it for you! By a good job I don't mean that you work real hard. I mean you work smart. You know what it is you are doing, why you are doing it, and why it is a benefit. You communicate those things to other people (on paper!). You look for solutions to problems, problems that you confidently define (on paper!). You face those problems one by one, but all the while keep the big picture in view. Dates that come not from thoughtful, thorough planning but from the need to hit a date are just a waste of everyone's time.

I wish that the management had brought in more experienced people to manage the project. I seriously question having total 'greenies' running the show. A project of this caliber requires massive planning. Even I know that, and I'm just a lowly IT peon. The planning has been lax. They basically drummed up a bunch of 'gaps' and said to me and mine, "here you guys figure these out and implement the solution". Great f#*king project management! Now that they have gone through all that hard work to find what the gaps are, I can:

  • Become an expert on the current business functions (much of which I don't know)
  • Become an expert on Magic's business functions, look for gaps :(
  • Compare the 2.5 Magic app to the 3.1 Magic app, find gaps
  • In one month, implement all the fixes to the vanilla 3.1 Magic release so that it can run our company's business functions
  • find out if proper functions exist in the new software
  • write business requirements
  • write design requirements
  • Create milestone/deliverables (running tested features?)
  • implement solutions (git' codin)
  • make contact with the users/customers
  • test the solutions
  • write test plans (maybe in my head only)
  • document proper processes
  • document my time and progress
  • Create and set my own goals, milestones and deliverables
  • Manage people on my team
  • More to come


Compound all this work with the following:

  • Create plans to train employees on new software
  • Train the employees on new software
  • Test the fully integrated solution
  • Test the warehouse on the new software
  • Collect info on the terrible bugs (don't sweat the small ones ;)
  • Fix the terrible bugs (probably goes in my list above)


There have got to be many things I'm missing here. When I look at all the things we have to do, and how much talent we have to do it with, I just can't fathom things going well. At least I know I will do the absolute best I can :)

Tuesday, October 03, 2006

Maven 2 + Hibernate + HSQLDB

Maven makes projects very portable. Hibernate makes working with persistent objects simple. HSQLDB is a Java database that you can run embedded in other programs.

A problem with using Hibernate is that it can take away some of the portability of your project, because you need to provide a database for making your objects persistent. In doing so your test config files end up spec'd to a single database. What happens when someone runs the project's tests and does not have access to the test database? The tests fail :( We want unit tests for all our Hibernate objects. It is especially important to test them thoroughly, as they will be central to your project functioning properly (especially web projects!).

Luckily Maven makes it easy to use one database for testing and another for production. This is where HSQLDB comes in handy. You can create a Hibernate config just for testing which connects to an in-memory HSQL database. You will need the proper files in the src/test/resources folder of your Maven 2 project: the test version of your hibernate config file, and also the test hibernate mapping files that accompany your code. I put the hibernate mapping files into folders that mirror the packaging of the POJO classes. When you run 'mvn package' the .hbm files are placed in the same folder as the POJO class they belong to. Here is an example of my Maven 2 project's structure:

/project
    pom.xml
    ...other files
    /src
        /main
            /java
                /some
                    /package
                        Foo.java
            /resources
                hibernate.cfg.xml
                /some
                    /package
                        Foo.hbm.xml
        /test
            /java
                /some
                    /testpackage
                        TestFoo.java
            /resources
                hibernate.cfg.xml
                /some
                    /package
                        Foo.hbm.xml


So you can see that there is a set of Hibernate files for both the build and the test phase. During the test phase Maven puts target/test-classes ahead of target/classes on the classpath, so the test copy of hibernate.cfg.xml is the one Hibernate picks up. Now in the test hibernate.cfg.xml file you will need the following properties set:


<!-- Database connection settings -->
<!-- HSQL DB -->
<property name="connection.driver_class">org.hsqldb.jdbcDriver</property>
<property name="connection.url">jdbc:hsqldb:mem:aname</property>
<property name="connection.username">sa</property>
<property name="connection.password"></property>

and...

<property name="dialect">org.hibernate.dialect.HSQLDialect</property>
<mapping resource="some/package/Foo.hbm.xml" />


These entries should now have you set up to use an in-memory database for all your Hibernate testing. But before this will work you need to do one more thing.

Set up your Maven 2 project so that HSQLDB is a dependency. Add this dependency section to your pom.xml:

<dependency>
    <groupId>hsqldb</groupId>
    <artifactId>hsqldb</artifactId>
    <version>1.8.0.1</version>
</dependency>


Note that you will not need to make any specific JDBC driver available. The HSQLDB dependency contains the proper driver!

Using this pattern you will be able to make very portable Maven/Hibernate projects. Users anywhere in the world will be able to check out your project and test it immediately. If they would like to test against their own database, it is not hard for them to overwrite what is in the src/test/resources/hibernate.cfg.xml file.