Sequence diagrams, the only good thing UML brought to software development

A sequence diagram describes the operations within a system, mapping which messages are sent and when.

In its simplest form, a sequence diagram could model the messages and flow between a user and their bank as they log in to the banking app. In more complex forms, sequence diagrams could include alternatives, options, and loops to model conditional and divergent flows if, say, the login process also includes security checks, verification steps, and other user actions.
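
For instance, here’s a minimal Mermaid sketch of that login flow – a sketch with illustrative participant names, not a prescribed design – using an alt block to model the divergent path of a failed login:

```mermaid
sequenceDiagram
    %% Lifelines: the user and the banking app
    participant User
    participant Bank as Banking App
    User->>Bank: Submit username and password
    %% alt models the conditional (divergent) flows
    alt credentials valid
        Bank-->>User: Login successful
    else credentials invalid
        Bank-->>User: Login failed, try again
    end
```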

If you haven’t used them extensively, you’ve likely heard of them in one of two contexts: one, in isolation, as a useful type of diagram to consider when writing documentation, or two, as an artifact of the now rarely used Unified Modeling Language (UML) from the late 1990s.

In this article, I’m going to briefly dig into the history of UML so we can see how and why sequence diagrams survived despite most of UML getting consigned to the dustbin of software history. Then, I’ll show why sequence diagrams are still valuable and how you can make the best use of them.

My interest comes from two sources: I think sequence diagrams are underrated and underused, and I think sequence diagrams are an ideal use case for MermaidChart because it allows users to choose informal simplicity over the rigid complexity that results from using older tools like IBM’s Rational Rose.

The rise and fall of UML #

UML originally came out in 1997 and, as Martin Fowler wrote around that time, its primary purpose was to “eliminate the bedlam that had overtaken graphical modeling languages in the object-oriented world.” The basic problem – one that has been repeated many times throughout the history of software – was chaos and confusion, paired with a real desire for a standard that would provide clarity.

Rational Software started developing UML in 1994; the Object Management Group (OMG) accepted it as a standard in 1997; and the International Organization for Standardization (ISO) adopted it as a standard in 2005.

The rise of UML saw excitement and criticism even as it became a standard (at least on paper). Many loved it but many either had problems with UML itself or with how people were using it. Death by UML Fever, a 2004 piece from a Boeing software architect, captures some of the complaints.

Here, the author writes that “No other technology has so quickly and deeply permeated the software-engineering life cycle quite like UML” and argues that UML had become a vehicle for people without software experience to design and control the software development process.

In the years since, a few obituaries have been written, including a 2018 piece from Ivar Jacobson (who was VP at Rational Software when UML was becoming a standard), a 2021 piece from Hillel Wayne (who interviewed some of the major people from those days, such as Grady Booch and Bertrand Meyer), and a 2022 piece from Laurence Tratt (who was directly involved in UML’s standardization).

These pieces are all worth reading, but they settle on an essentially similar explanation: UML got too complex (the UML 2.2 documentation, for example, ran over 1,000 pages) and became associated with burdensome, often wasteful pre-work.

UML’s life beyond death #

We’re talking, at this point, about a methodology that’s almost 30 years old – a methodology that its proponents and its detractors agree is essentially dead.

And yet, as a 2014 study showed – contrary to the researchers’ own expectations – most of the developers and software architects they surveyed were creating sketches and diagrams that “contained at least some UML elements.” The researchers noted that “most of them were informal,” but still, this is a surprisingly powerful life beyond death.

Occasionally, the topic comes up directly and we can see how people talk about modern-day UML. In this Hacker News thread, for example, a user asks whether learning UML is worth their time, and while most users agree UML itself is useless, many also suggest learning a few UML techniques (sequence diagrams chief among them).

One user even wrote that “The reward of the clarity of sequence diagrams is worth the pain and boredom of learning all the others at university,” which is a pretty strong recommendation if I’ve ever seen one. An even stronger recommendation comes from Mark Watson, who co-wrote an entire book about UML but now says that “Sequence diagrams are the only type of diagrams I use anymore.”

We can trace the survival of sequence diagrams back to UML’s origins. In UML’s heyday, Martin Fowler identified three use cases for it: sketching, blueprinting, and programming.

The programming use case died because, according to Hillel Wayne, “even most proponents of UML considered it a terrible idea.” The blueprinting use case actually appeared to be the strongest one but died out too because it had been tied to Rational Software and to CASE tools – both of which died and took UML with them.

The first use case – sketching – survived but it “drifted into multiple, mutually unintelligible dialects,” according to Wayne. Tratt agrees, writing that with hindsight, UML had reached its peak in 2000 “as a medium for software sketching.”

A medium for sketching is a much humbler vision than what UML proponents had imagined, but that’s no reason to undervalue what remains – especially when it comes to sequence diagrams.

Sequence diagrams out of the ashes #

Sequence diagrams survived not just because they were the best of a bad bunch but because they’re genuinely useful. As I wrote above, the gist of sequence diagrams is that you can use them to easily map and visualize the dynamic flow of messages across a system.

Message flows can get really complex but sequence diagrams provide two main components that create the backbone of the diagram:

  • Lifelines, which represent the participants in the interaction (objects, services, or actors) and their existence over time.
  • Messages, which represent the information exchanged between lifelines over time.

The base components have to be simple because a sequence diagram is meant to represent a system in action, where the represented components run concurrently and exchange messages in a defined order – sometimes in parallel. A good sequence diagram shows the flow, the messages exchanged between objects, and the function performed before the lifeline “dies.”
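
To see both components at once, here’s a bare-bones Mermaid sketch (participant names are illustrative): the participant lines declare the lifelines, the arrows are the messages, and the +/- shorthand marks the span where a lifeline is actively working before control returns:

```mermaid
sequenceDiagram
    participant Client
    participant Service
    %% Solid arrow: a request message; + activates the Service lifeline
    Client->>+Service: Request
    %% Dashed arrow: the reply; - deactivates the lifeline
    Service-->>-Client: Response
```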

The primary use cases for sequence diagrams are:

  • Sketching and designing how a system is supposed to work before building it.
  • Documenting the requirements of a new system.
  • Breaking down and understanding an existing (often legacy) system.

A sequence diagram can’t (and shouldn’t) capture an entire system, so in these use cases the best approach is to visualize how a system is used, diagram the logical flow of a particular process, or map out the functionality of a single service.

Sequence diagrams really shine when you’re documenting different parts of a system and the various ways these parts interact with each other. They don’t work as well when you’re trying to, for example, model an algorithm in a specific system. If you get too granular and too detailed, sequence diagrams become more trouble than they’re worth. But when you use them to map out different “black boxes” and show how they talk to each other, they can be really helpful.

But like other diagrams, sequence diagrams succeed in proportion to how well you make them. Quality isn’t a matter of sheer effort, however; it comes from a thoughtful approach that starts from the core process and works outward toward the edge cases.

Design sequence diagrams from the inside out #

There are many reasons why you might want to make a sequence diagram, but no matter the inspiration, the best way to make one – and solve the original problem – is to start from the inside and work your way outward.

Start with the happy path and work to the edge cases #

When you sit down to create a sequence diagram, it might be tempting to begin with the edge cases because they’re often the most complex and the most in need of clarification.

Often, it’s the possibility of an edge case (if you’re creating a sequence diagram to support architecture design) or an already present, already troublesome edge case (if you’re creating one to better understand legacy software) that inspires the diagram in the first place. But even if your primary goal is to clarify those edge cases, you’ll create a better sequence diagram if you start from the happy path.

When you start, identify the happy path – the ideal way messages flow from beginning to end. Once you diagram this core sequence, you can work outward to other routes and more infrequent message flows.

Returning to the banking application login, it’s best to start with the happy path – customers requesting access and the bank granting that access. Starting from this core flow ensures that as you think through and document divergent flows and edge cases, the happy path remains your anchor.

An example of a simple sequence diagram
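
In Mermaid syntax, that core flow might look something like the sketch below (illustrative names, not necessarily the exact diagram pictured above):

```mermaid
sequenceDiagram
    participant Customer
    participant Bank as Banking App
    %% The happy path: access requested, access granted
    Customer->>Bank: Request access (log in)
    Bank-->>Customer: Grant access
```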

From there, you can layer more complexity onto the happy path. In the example below, we’ve added a few more components, including an authentication service and a database, but the core happy path remains central.

An example of a sequence diagram with more details
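
Here’s a rough Mermaid sketch of that richer version – the specific services and messages are assumptions for illustration – where the customer-facing happy path still anchors the diagram:

```mermaid
sequenceDiagram
    participant Customer
    participant App as Banking App
    participant Auth as Authentication Service
    participant DB as Database
    %% The original happy path still frames the flow
    Customer->>App: Submit credentials
    App->>Auth: Validate credentials
    Auth->>DB: Look up user record
    DB-->>Auth: Return user record
    Auth-->>App: Credentials valid
    %% ...and it still ends with access granted
    App-->>Customer: Grant access
```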

Starting with the happy path provides clarity, ensuring that when you shift to the edge cases, you know how the sequence is supposed to run and know why a user might be encountering an edge case. Building up and out from the happy path is also the best way to avoid overcomplicating the final diagram.

Comprehensibility > Comprehensiveness #

The most common failure mode for sequence diagrams is over-complication. (This is also the failure mode for most diagrams, as I wrote in an article on flow charts.)

One of the best people to refer to here is Martin Fowler, who wrote (almost twenty years ago) that the primary value of drawing diagrams is communication. “Effective communication,” Fowler writes, “means selecting important things and neglecting the less important.”

The neglect is the tough part. Because the purpose of diagramming is communication, it’s essential to strip away some information so as to clarify other information. Fowler reminds us that “the code is the best source of comprehensive information,” so diagrams – by nature – shouldn’t be comprehensive (that’s what the code is for). Fowler puts it well, writing that “comprehensiveness is the enemy of comprehensibility.”

You can see this well in the sequence diagram below, which a developer cited in a PR to request that the team “consider abstracting away less important information from the diagram so that the reading developer can focus on the important ideas.”

An example of a sequence diagram with too much detail

In his article on the death of UML, Thomas Beale writes that the main reason UML became overly complex is that the creators tried to “define a single meta-model” that could provide all the elements necessary for over a dozen diagram types. Beale argues that “Each type of diagram in fact represents a specific conceptual space, which needs its own specific model.”

UML itself died, in part, because it added complexity instead of providing clarity. That’s worth remembering today because – just as UML did – any given sequence diagram will fail if it gets overly complex.

Big picture > Details #

If the former problem is a result of being too comprehensive and too broad, the next problem is a result of being too detailed and too narrow.

In Alex Bell’s article on UML Fever, one of the many “strains” he describes is “Gnat’s eyebrow fever” and it’s one of the most likely problems to afflict your sequence diagrams. He describes this fever as the “very strong desire to create UML diagrams that are extremely detailed” and argues it results from the belief that diagramming granular details “increases the probability that the resulting code will be more correct.”

Implementation is where the rubber meets the road, however, so if you’re building a sequence diagram to better inform design requirements, there’s a point in the process where it’s more efficient to stop diagramming and start coding.

That said, this principle extends beyond that use case. If you’re building a sequence diagram to communicate a process in your documentation, for example, visualizing the big picture will be more useful for readers than digging deep into the details. It’s not that the details are unimportant but that too many details will impair the ability to see the big picture sequence (which is the primary goal of sequence diagrams).

The same principle applies to analyzing and documenting legacy code – the detail is in the code itself so the sequence diagram will only be useful if you use it to visualize the big picture.

Embrace an architectural mindset with sequence diagrams #

The point of this article isn’t to look at sequence diagrams out of sheer historical curiosity. Sequence diagrams are not only an artifact of UML but also of a software design mindset that emphasized rigorous design and planning.

Fowler explains that the association between diagrams and “heavyweight” processes is a result of diagramming poorly – not a result of diagramming itself. The advice throughout this article is meant to help you create better sequence diagrams but in the process, hopefully, help you better see the possibilities that result from having diagramming skills in your design and documentation arsenal.

The best work comes from cycling between designing and coding – creating an upfront design, coding based on the design, and feeding what you learned from the coding work back into the design. If a diagram helps you understand a sequence, it’s “perfectly reasonable,” as Fowler writes, to throw it away after. (“Throwing it away,” however, doesn’t necessarily mean deleting it forever; it’s often helpful to put it aside so that you can return to it later if, for example, you want to think through previous work).

“The point,” which Jacobson emphasizes in his article about the death of UML, “is for every sprint to lead with architectural thinking.” With sequence diagrams, in particular, you can better understand the processes at hand – making it easier to build or improve their components.