Yesterday at Devoxx, Matt Raible did a very interesting talk on comparing JVM web frameworks. On this occasion he had the incredible courage of voicing his opinion on each of the most well-known frameworks, rating them in a matrix and the craziest part: showing this matrix to everyone.
Immediately after his talk, Twitter was on fire with advocates of each of those frameworks complaining about how unfair and biased the ratings were. Of course, I could hardly talk about 3 or 4 of these frameworks with the same level of confidence, but the guy has enough experience to have played with at least 13 of them, and it’s perfectly normal to expect him to have up-to-date and accurate feedback on all of them.
Anyway, his talk was highly entertaining, but in the end it left me with two reflections:
- His list of 20 criteria is excellent and covers pretty much everything, except maybe “graphic design integration”, which I think is very important and which some frameworks make much easier than others – Flex with Flash Catalyst, for example. So even if you don’t agree with the ratings, you can still reuse his methodology, build 13 proofs of concept and rate them yourself.
- Rather than complaining, let’s do a survey.
So right after the talk, I built a small Google Docs survey, and so far I have received 26 answers, which I think is far fewer than the number of complaints, but is already a good start. Here are the first results based on those 26 responses. As you can see, there was a small issue with Google Docs, which didn’t correctly save the results for the last criterion, degree of risk, so the rankings are based on only 19 criteria so far. If you still want to voice your opinion, you can. I will update the results from time to time, and the 20th criterion will then be taken into account.
Thanks again to Matt Raible for this very inspiring talk, and thanks to the whole Devoxx team for yet another memorable edition.
By the way, as far as I’m concerned, I’m pretty happy with Grails and Flex at the moment, but after the amazing demos I saw at Devoxx, I will probably take a deeper look at Vaadin very soon. And please, JetBrains, we need more visual designers (Flex? Vaadin?).
8 responses to “JVM Web Framework Survey, First Results”
[…] This post was mentioned on Twitter by Sebastien Arbogast, Valentin Jacquemin. Valentin Jacquemin said: RT @sarbogast: JVM Web Framework Survey, First Results http://lnkd.in/UsCief […]
Hello Sebastien,
I was at his talk too, and it was one of the best I saw at Devoxx 2010 (2 days of conference for me).
It was really interesting and concrete for my situation.
OK, not everyone will agree with the result; everyone will defend their ‘preferred’ framework (which did not seem to be the case in the presentation), but for me it was a good starting point and could help with this large choice of technologies.
Hope your survey will ‘enlarge’ the vision …
It would be nice to know how you put your numbers together… they seem pretty random.
I just took the average of the grades for each criterion and each framework, excluding “I don’t know” answers. So frameworks that few people know well have less reliable grades, but I didn’t want to get into weightings and the like.
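For what it’s worth, the aggregation described above boils down to something like this minimal Java sketch – the class name, method name and data shape are mine for illustration, not the actual survey code:

```java
import java.util.List;
import java.util.OptionalDouble;

public class SurveyAverages {

    // Average the grades submitted for one framework/criterion pair,
    // skipping "I don't know" answers entirely (they don't count as 0).
    static OptionalDouble averageGrade(List<String> answers) {
        return answers.stream()
                .filter(a -> !a.equals("I don't know"))
                .mapToDouble(Double::parseDouble)
                .average();
    }

    public static void main(String[] args) {
        // Only 4, 5 and 3 are counted: (4 + 5 + 3) / 3 = 4.0
        System.out.println(
                averageGrade(List.of("4", "5", "I don't know", "3")));
    }
}
```

Note that a framework graded by only a couple of respondents yields an empty `OptionalDouble` or a very noisy average, which is exactly the reliability caveat mentioned above.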
Hahaha, so now there’s Matt’s ranking and a community ranking :))
This comparison gives little to no information… When it comes to decision making, what would make the difference are real-world, cutting-edge showcases of successfully built, industrial-strength software.
These numbers are subjective and too abstract – the list of 20 can easily be reduced to 3:
– time
– money
– quality
Give the same requirements to 10 teams, each highly specialized in one particular framework, and measure these 3: the time needed to hit the deadline, the money invested in the process and the number of defects at the end.
Then put these teams through a lengthy evolution phase and, after a year, check which of the teams has the top features implemented, and with how many new and old defects.
This is the ultimate test – extremely demanding requirements and clean comprehensible metrics.
Nevertheless, this stuff is very thought-provoking – in the end this is an extremely important strategic decision. It’s interesting whether someone really tried to approach this from a more scientific point of view?
I wish I had seen the talk…
Vladimir Tsvetkov wrote: “It’s interesting whether someone really tried to approach this from a more scientific point of view?”
Of course there should be a scientific approach to the comparison of Java Web Frameworks.
… And like any scientific approach, it had better end with some numbers. Those from the matrix indeed give little. Giving “the same requirements to 10 teams…” would be a good idea, if it weren’t so expensive. Besides, this approach – where each team is “highly-specialized in one particular framework” – does not cover the learning curve. Taking 100 randomly chosen teams would probably be better. That would indeed be a scientific approach, but pure experimental science.
Speaking more seriously, we may remember something (alas, mostly forgotten) from the early days of the science of programming, specifically the work on complexity and on the relative levels of programming languages. Things like that are indeed measurable.
Applied to web frameworks, we may imagine some average web application with typical functionality as a working entity, and then estimate the size of that same application from the point of view of developer effort. Developers write what they need in a certain language, or rather in the set of languages defined by the technology (a.k.a. framework) – the programming language itself, of course (and you are not going to use Cobol, right?), plus configuration files, HTML used in various ways, and so on. The level of this combined language is high if the given framework permits coding the project with a smaller number of operators.
As said, experimental parallel programming of a test project is not realistic (though history knows some examples), but just as with comparing languages and compilers, some artificial yet sophisticated test sets may be developed over time. Programming such a test should take at most a few days for any team “highly-specialized in one particular framework”.
As an example, and possibly as a starting point, I may suggest a set of tests I have developed to test my own framework – see http://www.hybridjava.com:8080/HJ_Sample/
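The “relative level” idea discussed in these comments can be sketched in a few lines of Java. Everything here is an assumption for illustration: the metric (a simple ratio of hand-written operator counts for the same reference application) and the numbers themselves are made up, not measurements of any real framework:

```java
public class RelativeLevel {

    // Hypothetical metric: how many times fewer hand-written
    // "operators" (counted across Java, configuration, HTML, ...) a
    // framework needs, compared to a baseline, to express the same
    // reference application. Higher ratio = higher-level combined language.
    static double relativeLevel(int baselineOperators, int frameworkOperators) {
        return (double) baselineOperators / frameworkOperators;
    }

    public static void main(String[] args) {
        // Made-up counts, for illustration only
        int plainServletVersion = 1200; // baseline implementation
        int frameworkVersion = 400;     // same app on some framework
        // 1200 / 400 = 3.0: by this metric, the framework's combined
        // language is three times higher level than the baseline
        System.out.println(relativeLevel(plainServletVersion, frameworkVersion));
    }
}
```

The hard part, of course, is not the ratio but agreeing on what counts as one “operator” across such a heterogeneous mix of languages.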