In the previous post in this series, I tried to enumerate the most frequent kinds of applications. The question I’m going to ask here is: what are the constraints that come into play when choosing the right paradigm and the corresponding technologies to implement it?
Environment! Environment! Environment!
Before we start answering that question, let’s be clear about something. We live in a world where there are plenty of free and Open Source libraries, frameworks, and tools of all kinds. That doesn’t mean free is always good, but at least it’s an option, and if a commercial option can add value somewhere, then go for it, it’ll be worth it. So I won’t consider tooling cost as a parameter here.
Performance (high computational power and low bandwidth)
Whenever you hear your customer say “I need it to handle several million transactions per second” or “I quickly want to make decisions based on thousands and thousands of records”, you know that you will have to think about performance. There are several kinds of performance: memory consumption, CPU cycles, disk space, network bandwidth, hardware cost, and so on. And all those metrics very often interact, which means that any change to one of them has an impact on all of the others. For instance, it’s very common to increase memory consumption in order to optimize CPU or disk access (caching).
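To make that trade-off concrete, here is a minimal sketch of trading memory for CPU through memoization (Python is just my illustration language here, and the computation is a made-up stand-in for an expensive query or aggregation):

```python
from functools import lru_cache

# Hypothetical expensive computation standing in for a costly query or
# aggregation over thousands of records.
def expensive_total(month: int) -> int:
    return sum(i * i for i in range(1_000_000))

# Caching trades memory for CPU: each distinct argument is computed once,
# then served from memory on subsequent calls.
@lru_cache(maxsize=128)
def cached_total(month: int) -> int:
    return expensive_total(month)

if __name__ == "__main__":
    cached_total(1)  # pays the CPU cost and stores the result in memory
    cached_total(1)  # almost free: the result comes straight from the cache
```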
Another important characteristic of performance work is that optimization requires you to dig into low-level details, because most of the performance is lost in the abstractions that bring machine constructs closer to human users. That’s why optimizing performance takes more work than doing things naively, and it’s very important to weigh the benefits of that work against its cost.
Moreover, it’s sometimes tempting to think about performance very early on and to focus on it more than on the business value the application is supposed to create. But experience shows that you can quickly end up with very fast systems that don’t do what they are supposed to do, because the closer you are to the machine, the harder the system is to develop and maintain. That’s why Donald Knuth said:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Distributivity (number and spread of end users)
Nowadays, it seems like all applications are meant to be web applications, all the more so with the recent fashion for cloud-based applications that attempt to “webify” traditionally desktop-based apps like word processors, spreadsheets, and so on. And so much effort has been spent on web apps in the past 10 years or so that everyone knows about the technologies to build them. Yet it’s always important to ask yourself a few questions: will the application be accessible to the general public? Will it be extranet or intranet? How many users are likely to access the application at the same time? Are potential end users ALWAYS online? What would be the impact of the browser crashing in the middle of a session?
Sometimes, having to think about data access concurrency, network bandwidth or security is a needless hurdle that you can avoid simply by developing a desktop application.
Automation (launch it regularly and in the background)
What if your application doesn’t need a complex user interface but requires just a few parameters to do its job? What if, on the other hand, it needs to be easily automated and integrated into a batch processing system? When you face such a business context, it’s important to consider the option of a CLI app, because it is then also easier to integrate with other system tools through scripting.
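As a rough illustration (the tool name, arguments and behavior are all made up), here is what such an automatable command-line entry point could look like in Python: a few parameters, no GUI, and a meaningful exit code so that shell scripts and cron jobs can chain it and react to failures.

```python
#!/usr/bin/env python3
"""Hypothetical 'sync' tool: a few parameters, no GUI, easy to automate."""
import argparse
import sys


def main() -> int:
    parser = argparse.ArgumentParser(description="Synchronize two folders.")
    parser.add_argument("source")
    parser.add_argument("destination")
    parser.add_argument("--dry-run", action="store_true",
                        help="report what would change without doing it")
    args = parser.parse_args()

    # The actual synchronization logic would go here.
    action = "Would sync" if args.dry_run else "Syncing"
    print(f"{action} {args.source} -> {args.destination}")
    return 0  # a meaningful exit code lets scripts and cron jobs react


if __name__ == "__main__":
    sys.exit(main())
```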
Whenever you hear your customer say words like “data analysis”, “system check” or “automatic synchronization”, you’d better think twice about your web app idea.
Ergonomics (easy and quick data input and visualization)
At the other end of the data analysis pipeline, there is data input. And the more data there is to input, the higher the risk that end users will reject the application. And since end users generally wait a long time to get their hands on the application, this rejection traditionally happens very late in the development process. Combine that with the fact that the people who ask for the application are not the ones using it, and with the very particular mindset of developers, and you have every chance in the world of missing your target and having the project fail before it reaches the finish line.
Of course, technology is not the primary solution to this problem. The first thing is to consider end users, consult them, and talk with them, even if the business owner doesn’t think it’s useful. Then, of course, methodology goes a long way in putting the application into end users’ hands as soon and as often as possible. But as soon as you realize how specific users’ expectations are, you understand that you need a technology that gives you the freedom to implement very complex use cases, without forgetting the conventions and paradigms that people are used to.
Integration (with operating system and external systems)
Web apps have another big drawback in addition to ergonomic limitations: desktop integration. This issue comes from the security model of the web. Because it’s so easy to access a web application, because you don’t have anything to install and because the application is directly connected, it also creates a huge opportunity for malicious use. That is why web apps usually run in what is called a sandbox: network access is limited to the originating domain (unless specified otherwise), no direct access to the file system is allowed, and there is no native API access to things like system tray icons, drag and drop, and so on.
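The “unless specified otherwise” part refers to mechanisms like CORS, where the server explicitly declares which foreign origins are allowed to read its responses. Here is a purely illustrative sketch using only Python’s standard library (the allowed origin is invented):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


class CorsHandler(BaseHTTPRequestHandler):
    """Tiny JSON endpoint that explicitly allows one foreign origin."""

    def do_GET(self):
        self.send_response(200)
        # Without this header, a script served from another domain would be
        # blocked by the browser from reading this response.
        self.send_header("Access-Control-Allow-Origin", "https://app.example.com")
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"status": "ok"}')


if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CorsHandler).serve_forever()
```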
And if your application has to import or export very big files, or notify the user on a regular basis, those limitations can be a killer. There are technologies now that create a sort of bridge between a runtime plugin in your browser and a runtime app on your machine, but the portability of this bridge across systems and browsers is sometimes limited.
Productivity (getting things done and adapting fast)
How stable are the business rules you’re asking me to implement? How sure are you of what you expect from this application? If your customer answers “not very” to any of these questions, you might think twice about using that low-level, highly-optimized programming language. Because if it takes you weeks to implement any change or new feature, your application might quickly end up very far from the business value it was supposed to create.
Fortunately, with the maturity of web application development, there have been a lot of very interesting developments in the area of developer productivity lately. Development tools like integrated development environments certainly go a long way in making developers more productive, but when this concern is dealt with at the programming language level, it’s even better.
Maintenance cost (number and quality of resources)
Whatever technologies you plan to use, you definitely must consider the constraint of resources. There are so many technologies out there that it’s impossible for anyone to know all of them. Some of those technologies are very mature and popular, which makes it easier to find people to maintain and evolve your application over the long term. But the more mature they are, the less innovative they often are. So finding the right compromise between the benefits of innovation and the cost of the resources needed to maintain your application is very important. It might require some insight and technology watch in order to anticipate which of these innovative technologies will grow fast and be around for a long time.
And if you really need one of these innovative technologies that is not very popular yet, then don’t forget to include training costs in your plan. Last but not least, don’t forget to consider company-wide policies: IT architecture departments can put substantial impediments in your way, which might lead you to factor in the cost of those impediments.
Continuity (robustness and evolvability)
Beyond the people able to maintain it, there is another thing that is very important for the longevity of your application: the intrinsic software quality assets of the technologies you use. Testability, decoupling, Domain Specific Language support, portability, internationalization support, integration capabilities with other technologies and platforms, extensibility, modularity: all those characteristics can be very important to consider if your application is supposed to stay around for more than 5 years and evolve with the business at hand.
A lot of money is spent, and sometimes wasted, in reengineering entire applications just to keep up with current technologies or new business constraints, so much so that choosing robust and evolvable technologies can greatly reduce the long-term ownership cost of the application.
In the final issue, I’ll venture to make some predictions about the technologies that seem most important for implementing applications with these kinds of constraints. But before I do that, do you see other business constraints that might be important to consider before choosing the best tools for the job?
5 responses to “Software Architecture Cheatsheet (Part 2/3)”
Under the subjects of Maintenance and Continuity, you also have to consider more than just the application that you are building. You have to consider the supporting structure that surrounds your application, e.g. the tools that are used to actually create and build your application.
Longevity means more than just being able to bring new people on and get them to understand the code. Your code, your tools, and your process have to survive people leaving the team and people being added to the team. If your code is well documented, but your surrounding infrastructure is only in one person’s head, you’ll still have longevity, maintenance, and continuity problems.
I don’t understand why you think that web apps are less secure. I think they are more secure, because nobody has access to the source code and you can’t crack it.
In a desktop app I can intercept what is sent to the server (it’s pure SQL being sent to the DB), and with a web app I don’t even know the address of the DB server.
Most of the security problems in web apps also exist in desktop apps (except XSS).
Hmmm… let’s see… Cross-site scripting, SQL injection, brute-force password cracking, denial of service: all things that simply don’t exist when your application is only accessible through a desktop app.
Now what you say about a desktop app is another problem. It’s a matter of design: IMHO, you should NEVER EVER transfer pure SQL over the wire or know the address of the DB on the client side. But this could be the topic of a future post.
Finally, believe it or not, there are still applications that simply don’t require any distributivity, no DB, no server-side. In those cases, proposing a web-app architecture is a little like commuting with a tank, don’t you think?