Alina Stürck
December 16, 2024 | 5 minutes to read

Shaping Architectural Work Together

How DB Systel achieved more clarity, participation and quality with LASR


DB Systel: Digitalization for the Entire Group

DB Systel is the digitalization partner of Deutsche Bahn. With approximately 7,000 employees, the company supports the entire group in its digital transformation journey. The work is carried out in an open network model with self-organized teams that are responsible for developing and operating digital products — in an agile, technology-driven, and customer-centric manner.

One of these teams is developing a web-based information system that will be used in DB Cargo’s European freight transport operations. The system consolidates data from various sources, assists dispatchers in planning and managing transports, and plays a crucial role in ensuring operational transparency within a highly complex environment. This case study explores the team’s experience with LASR, from the initial contact to the retrospective evaluation of the first application.

Starting Point

High Responsibility - Limited Shared Understanding of Architecture

We are a team of ten people, working on a system designed to assist dispatchers in efficiently managing and monitoring transports. It’s a data-intensive web application built on Java, Spring Boot, and Angular, featuring numerous list views, real-time data, and integrations with various source systems.

After more than three years of development, our system had grown significantly: six services, numerous interfaces, and ongoing functional enhancements. As a result, pockets of expertise emerged, with only a few colleagues retaining a comprehensive understanding of the overall system and its architecture. While the team was technically proficient, communication was somewhat reserved. Several years of remote work compounded this, making informal conversations and spontaneous architecture discussions more difficult. Within the team, feedback on the software’s quality became increasingly critical, though often vague.

We were therefore looking for a method that would allow us to address both quality and architectural concerns collaboratively while actively involving all team members. LASR – the Lightweight Approach for Software Reviews – appeared to be the structured yet accessible format we needed to bring architectural topics into the team’s discussions.

LASR Fact Sheet

   Participants: 8 persons

   Organizational Level: Team

   Time Spent:

  • Understand what makes you special (Steps 1-2): 60 min
  • Explore your Architecture (Steps 3-4): 90 min + 90 min
  • Extra time: 30 min for TODOs and next steps

   Results:

  • Identified Risks: 8
  • Biggest Gaps: Maintainability, Functional Suitability


Our LASR Experience

We aimed to bring as many participants as possible together in person, and our Product Owner was also part of the group. We focused solely on the core elements of LASR, without delving into the deeper aspects with LASR+, and allocated a total of 5 hours for the session.

Step 1 - Lean Mission Statement

We didn’t need to completely reinvent the mission statement, as there was already a foundation in place. We refined the existing text, adding more detail and making slight enhancements. The first sentence became: “Monitor transports from start to finish at a glance, with intelligently combined schedules for both domestic and international destinations.”

Step 2 - Evaluation Criteria

To identify the top 5 quality goals, we made slight adjustments to the Top-5 Challenger. After laying out the random set of five target cards, the remaining candidate cards were distributed among the group. To initiate discussion, each person was asked to decide whether their assigned card(s) should be traded in or set aside. The goal was to ensure that everyone was actively involved from the outset.

The five quality goals we determined for our evaluation criteria were usability, reliability, operability, functional suitability, and maintainability. We also wrote a few key points on post-its to document the reasons behind our selection of these criteria. The spider web diagram illustrates these goals alongside their relevant target values.


Step 3 - Risk-based Review

To reduce the number of standard risk cards and involve as many people as possible, we assigned each of the 8 categories to a team member and gave them the 4 associated risk cards. The “owner” of each category then decided how many cards should be included in the review. In some categories, all 4 risk cards made it into the final deck, while in others, only 2 were selected. The deck was shuffled and randomly distributed to all participants.

The rest of the risk-based review followed the standard procedure. Below is a photo from the workshop. A quick tip: our table was slightly too small. Plan on at least 1 × 1 m, plus a little extra space for writing post-its.

We set timeboxes of 3-5 minutes for classifying each risk (based on impact and likelihood) and also took notes during the process. The resulting diagram can be seen in the factsheet, where we identified significant gaps in maintainability and functional suitability.
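To make the classification step concrete: each risk ends up with an impact and a likelihood rating, and the combination determines where it lands on the matrix and which gaps surface as the biggest. The sketch below illustrates this ranking idea in Java (the team's stack); the `Risk` type, the three-level scale, and all example entries are hypothetical illustrations, not the workshop's actual data or part of the LASR method itself.

```java
import java.util.Comparator;
import java.util.List;

// Illustrative sketch only: capturing the impact/likelihood classification
// from a risk-based review and ranking the results. All names, levels, and
// example risks here are hypothetical, not the team's workshop output.
public class RiskRanking {

    enum Level { LOW, MEDIUM, HIGH } // the two axes of the risk matrix

    record Risk(String description, Level impact, Level likelihood) {
        // Simple ordinal score: product of the two levels (1..3 each).
        int score() {
            return (impact.ordinal() + 1) * (likelihood.ordinal() + 1);
        }
    }

    public static void main(String[] args) {
        List<Risk> risks = List.of(
            new Risk("Only two people understand the core service", Level.HIGH, Level.MEDIUM),
            new Risk("Missing integration tests for one interface", Level.MEDIUM, Level.HIGH),
            new Risk("Outdated framework version", Level.MEDIUM, Level.LOW)
        );

        // Highest-scoring risks first: these are the candidates for the
        // "biggest gaps" discussed in the workshop.
        risks.stream()
             .sorted(Comparator.comparingInt(Risk::score).reversed())
             .forEach(r -> System.out.println(r.score() + "  " + r.description()));
    }
}
```

In practice the classification stays on post-its and the matrix poster, of course; the point of the sketch is only that a coarse ordinal scale is enough to produce a defensible priority order.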


Step 4 - Quality-focused Analysis

In this step, we compiled the strengths and weaknesses in a table. We assigned the relevant quality attributes to each and identified the software components affected. Afterward, we prioritized the weaknesses.

What Has Come of It?

Afterward, we analyzed the strengths and weaknesses in more detail and recorded them in our JIRA system. We now have regular “quality sprints” during which we address these findings — we’ve already done this twice.

We are still working with the list of strengths and weaknesses that we initially developed during the LASR workshop.

Insights and Tips

Rhythm: Software is, after all, a living product. We plan to hold a LASR workshop once a year from now on, slightly shortened each time, since everyone is already familiar with the method.

Playing Cards: The cards worked really well. Even participants who are typically reserved were motivated to contribute by the vibrant illustrations and the tactile experience. The Top-5 Challenger game mechanism also engaged everyone effectively and sparked new ideas. It was the perfect fit for our starting point.

Preparation Time: We allocated about 10 minutes of preparation time or buffer for each step. During this time, we introduced the steps and extended the timeboxes when necessary. It was extremely valuable for us to take the time to thoroughly discuss important findings.

Result: Although the result after our half-day workshop was somewhat preliminary, the key risk topics were identified, allowing us to work on the details during our quality sprints. LASR provided us with plenty of food for thought!

"The key risk topics were identified, allowing us to work on the details during our quality sprints."

Alina Stürck

Business Engineer