Sunday 23 September 2018

Driving Quality using Executable MBSE: An IBM Rhapsody Enlightenment webinar recording

In August I did a webinar as part of the IBM Rational Rhapsody enlightenment series that covers the slightly more advanced topic of executable MBSE.

The following is IBM's link to the recording (you need to give them your email address but it plays straight away):
https://attendee.gotowebinar.com/recording/6132834359953946881

The webinar was mainly a demo. It focuses on using executable MBSE to build test case sequence diagrams and plans from use cases, using a method I'm calling ROCKETs (Requirements Originating in the Context of Knowledge from Executing Transitions on Statecharts). It's a method that works really well with Rational DOORS.

This shows a prototype of my v3 SysMLHelper, an open source plug-in and profile that aims to make the job simple. It's a much more advanced topic than the first video but I've tried to make it fun and interesting. I start with a blank model and transition through the process of making an executable model from nothing, finishing with use of an add-on to Rhapsody called "Test Conductor" that enables suites of tests to be built and executed.

Of course, this is possible without automation, only I wouldn't be able to do it in 20 mins! It would take me weeks to build the model and explain how to build it. The automation I show is something I've been working on and using for over 3 years now. There's over 10K of Java and some of my best ideas for making working with Executable Model-Based Systems Engineering (MBSE) and IBM Rhapsody Designer fun and relevant ;-)

Note: My next public training course is w/b 23rd Oct 2018 at HORIBA-MIRA (in concert with the Functional Safety Team).

Wednesday 11 July 2018

Why are textual activity diagrams good for creating requirements from use cases?

One successful approach is to use one activity diagram per use case to document the steps of the use case, together with trigger, pre-conditions and post-conditions.





















The benefits of the activity diagram are:
  1. Their flow-chart-like syntax can be read with little or no training. This makes them the most easily consumable behavioural diagrams in UML and SysML and hence a good place to start engagement with stakeholders (caveat: stick to using control flow semantics).
  2. Alternate flows of the use case can be expanded on the same canvas (either using interruptible regions or decision nodes), giving the diagram a power of analysis that is difficult to achieve with textual step-based approaches. “If you don’t actively attack the risks, the risks will actively attack you.” Tom Gilb, 1988










Importantly, using a model, unlike a Word document, we can easily capture traceability between use case steps and textual requirements using SysML relations (satisfy, refine, and other dependencies). From the activity diagram we might create textual requirements. The activity diagram is the canvas on which we reconcile textual system requirements with the steps of a use case. They provide complementary views of the system from an external perspective.
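For a feel of how little ceremony this takes in a plugin, here's a minimal sketch against the Rhapsody Java API (rhapsody.jar) that adds a «refine» dependency from a use case step (an action) to a textual requirement. The element names and metaclass strings are assumptions for illustration, and it assumes the standard SysML refine stereotype is available; it's a sketch of the idea, not the SysMLHelper implementation.

    import com.telelogic.rhapsody.core.*;

    public class AddRefineExample {
        public static void main(String[] args) {
            // Connect to the running Rhapsody session and get the active project
            IRPApplication app = RhapsodyAppServer.getActiveRhapsodyApplication();
            IRPProject project = app.activeProject();

            // Hypothetical element names and metaclass strings, for illustration only
            IRPModelElement action =
                    project.findNestedElementRecursive("Accept start request", "Action");
            IRPModelElement requirement =
                    project.findNestedElementRecursive("REQ_001", "Requirement");

            if (action != null && requirement != null) {
                // Create the dependency and mark it as a refinement
                IRPDependency dependency = action.addDependencyTo(requirement);
                dependency.addStereotype("refine", "Dependency");
                app.writeToOutputWindow("Log", "Added refine dependency from "
                        + action.getName() + " to " + requirement.getName() + "\n");
            }
        }
    }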

















Traceability in-situ has significant benefits, including:
  1. Increased ability to cope with requirements churn. When developing new ideas, requirements churn cannot be ignored. When performing requirements definition work, churn is both a good thing (we hone down the ideas to distil them into the essence of need) and a bad thing (if we can't cope with changes). Capturing traceability in-situ as part of the task makes it virtually effortless, whereas waiting until after the event makes it virtually impossible.
  2. Increased audit-ability and assurance of due diligence. This is necessary for safety-related processes such as ISO 26262 or for general process compliance to standards such as A-SPICE.
  3. Improved reviews. Requirements can be written so they are SMART (Specific, Measurable, Achievable, etc.), but we also have a picture of the story of a use case in which to put and read them in context, making the requirements review easier and more pleasurable (focusing less on the English grammar and more on the user experience).
In A-SPICE this relates to:

SYS.1.BP1: Obtain stakeholder requirements and requests. Obtain and define stakeholder requirements and requests through direct solicitation of customer input and through review of customer business proposals (where relevant), target operating and hardware environment, and other documents bearing on customer requirements. [OUTCOME 1, 4]

SYS.1.BP2: Understand stakeholder expectations. Ensure that both supplier and customer understand each requirement in the same way. [OUTCOME 2]

NOTE 4: Reviewing the requirements and requests with the customer supports a better understanding of customer needs and expectations.

Traceability relates to:

SYS.1.BP5: Manage stakeholder requirements changes. Manage all changes made to the stakeholder requirements against the stakeholder requirements baseline to ensure enhancements resulting from changing technology and stakeholder needs are identified and that those who are affected by the changes are able to assess the impact and risks and initiate appropriate change control and mitigation actions. [OUTCOME 3, 6]

NOTE 5: Requirements change may arise from different sources as for instance changing technology and stakeholder needs, legal constraints.

Thursday 28 June 2018

Why is the payload textual requirements?

Despite being an executable MBSE method, the ROCKET method takes a strong stance that the payload carried by ROCKETS will always be textual requirements. There are good reasons for this.

Firstly, textual requirements are a common currency (as a manager I know used to say a lot). Executable MBSE can be our money-making machine but - by producing and consuming textual requirements in models - we can spend the money in different shops (outside of the model). For example, we can put textual requirements into a Requirements Management (RM) tool like DOORS NG or DOORS 9, send them to suppliers in ReqIF format or via an exchange server, and we can exchange them with test management tools like RQM. A focus on textual requirements as the formal hand-off eases adoption of MBSE techniques into an existing requirements-driven business or process and means that all those good things that come from a focus on well-formed and well written requirements still apply.

Secondly, SysML provides a very clear, mature and intuitive way of dovetailing textual requirements with models, well supported by tool integrations with RM tools. Textual requirements, being neutral to modelling elements, can be associated with different views of the same system, e.g., activity models, interaction views, state machine diagrams, use case models, static design, and interfaces. Requirements are omnipresent, in that they can be shown on any of the diagrams or meta-elements that make up the OMG SysML standard. You can use a host of SysML artefacts like tables, matrices, and diagrams to view and create traceability in a model to enable you to get the best of both worlds.
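To make the "view it anywhere" point concrete, here's a rough sketch, again against the Rhapsody Java API, that reports every model element holding a dependency on a given requirement, whichever diagram or view it appears in. The requirement name is hypothetical and a real helper would filter by stereotype (satisfy, refine, etc.); treat it as an illustration of the idea rather than the profile's code.

    import com.telelogic.rhapsody.core.*;

    public class WhoTracesToRequirement {
        public static void main(String[] args) {
            IRPApplication app = RhapsodyAppServer.getActiveRhapsodyApplication();
            IRPProject project = app.activeProject();

            // Hypothetical requirement name, for illustration only
            IRPModelElement theReqt =
                    project.findNestedElementRecursive("REQ_001", "Requirement");
            if (theReqt == null) {
                return;
            }

            // Walk every element in the model and inspect its outgoing dependencies
            IRPCollection allElements = project.getNestedElementsRecursive();
            for (int i = 1; i <= allElements.getCount(); i++) {
                IRPModelElement candidate = (IRPModelElement) allElements.getItem(i);
                IRPCollection dependencies = candidate.getDependencies();
                for (int j = 1; j <= dependencies.getCount(); j++) {
                    IRPDependency dependency = (IRPDependency) dependencies.getItem(j);
                    IRPModelElement target = dependency.getDependsOn();
                    if (target != null && target.getGUID().equals(theReqt.getGUID())) {
                        app.writeToOutputWindow("Log", candidate.getMetaClass() + " '"
                                + candidate.getName() + "' traces to "
                                + theReqt.getName() + "\n");
                    }
                }
            }
        }
    }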

This shows a textual requirement tracing to an action and accept event action on a SysML/UML activity diagram:














This is the same behavior described in a SysML/UML sequence diagram:














This is the same requirement tracing to a transition on a SysML/UML state machine:














All of these are examples of refinement at the same level of behavioural abstraction. Each transformation is done for a different purpose.

Importantly, the same textual requirement can be associated with different modelling elements. This enables something very subtle but inherently powerful to occur in the ROCKET method: the transformation of model data from one form to another. Each of these behavioural representations represents the same requirements in a different way. Like stepping stones (moon hops) over a stream, they enable you to get to the other side without getting wet (which is never a good thing when it comes to being a systems engineer ;-). You can even make a choice about whether to use the stones again or throw them away. Use case steps can be transformed into operations and events and put into state machines; these can be executed to create test scenarios that show the operations and events; the operations and events can then be copied to Blocks that represent architectural components and, by virtue of the copy, will trace to the same requirements, so that when you create a sub-system requirement the traceability to the higher-level requirement can be captured in-situ.

The method is highly formalised, highly systematic, and thus repeatable. By systematic, I mean that it involves lots of little steps (nope, not a giant leap for man/human kind ;-). The systematic nature of the model transformations means they can be supported by automation that speeds them up. Engineers can focus on engineering decisions about their problem rather than on how to build models or use the tool, and the tasks of refining the requirements can be optimised into phases (don't worry about executing a model until you've actually spoken to stakeholders to validate the needs and the art of the possible). Hurrah! What's not to like?

The consistency of the systematic approach and the use of automation lead to a production-line mentality. Models that come off the line will have common layouts and make consistent use of the SysML metamodel. This makes external audits a doddle and means that engineers working on one model can easily switch to a model created by a different team using the same method and tools, enabling you to move resources easily between projects. Systematic approaches can be documented and thus used for quality assurance purposes, enabling external audits such as ISO 26262 and A-SPICE for automotive, or FDA approval of medical devices.

Phew! (I'll take a breather, stay tuned for more...)

Monday 25 June 2018

Introducing ROCKETS: An executable MBSE process

What is the ROCKETS MBSE process?

I've decided to name my MBSE method ROCKETS. ROCKETS is a derivative of the Harmony/SE method that uses executable MBSE with SysML and IBM Rational Rhapsody. It stands for Requirements Originating in the Context of Knowledge Executed as Transitions on Statecharts. It’s designed to be fun. To make it fun, a few aspects are considered important:

  1. Automation, so that the user is focused on the creative aspects of solving the problem, rather than on setting up the model.
  2. A systematic approach: the method splits the problem down. The adage is to take tiny steps rather than a giant leap.
  3. Team work: how people interact during the process is considered as important as the artefacts that are produced.

The approach is use case driven, and conforms to the original definition of a use case as:
“A description of a set of sequences of actions, including variants, that a system performs that yield an observable result of value to an actor” [Booch, Rumbaugh, Jacobson 1999] (where actors are entities outside of the system that gain value from or participate in the use case)

Importantly, use cases are stories about how the system is intended to be used. ROCKETS is therefore a method focused on concept of operations (CONOPS) modeling. CONOPS models describe clearly and concisely what is to be accomplished and how it will be done using available resources. As such, the model is independent of aspects such as programming language and software design. We are trying to model who needs to do what, rather than how it is done.

What are the phases of the ROCKET process?

There are three phases to the ROCKET process:

  1. Payload definition.
  2. Launching the rocket(s).
  3. Assembling the space station.

The key challenge the ROCKETS process overcomes is the management of complexity: the division of responsibility across a team and between teams of engineers such that the task can be accomplished with productive use of resources. In practice these phases occur in parallel; we can’t get the whole space station into a single rocket, so we need to send up a series of rockets.

The space station analogy is a useful one because we can picture it as a series of components that are interconnected. Importantly, we might read the components as component owners. Allocating work to different teams by deciding which team needs to do what is a systems engineering task.

Payload definition starts on the ground with the creation of functional system requirements that we intend to launch up to the space station. To get them to the space station we need to launch them. We do this with executable MBSE. Much like how we might hypothesise how a clock might work, executable MBSE involves building a simulation that we can interact with to test how the system may work. We then need to get the payload into space in a rocket, where it will be assembled by a separate team.

What is the relationship to process standards?

The ROCKETS process aligns directly with SYS.1 to SYS.5 of A-SPICE 3.

SYS.1 Requirements Elicitation
SYS.2 System Requirements Analysis
SYS.3 System Architectural Design
SYS.4 System Integration and Integration Test
SYS.5 System Qualification Test

Friday 22 June 2018

Gen #3 profile testing with TestConductor

I've concluded the first round of enhancements, focused on being able to move sequence diagrams that involve interacting with the simulation into test cases in Test Conductor. I actually found more enhancements were needed than expected, hence a few changes from the Gen #2 profile. The main thing was to make use of interfaces on ports (rather than rapid ports). This introduced an issue I hadn't considered around inheritance in the framework. However, that's all solved with a few changes to the helper.

Thursday 24 May 2018

Video (8 mins) on Making MBSE with Rhapsody simple (SysMLHelper 3rd generation enhancements)

This video takes you through some of the enhancements I’ve been working on for the 3rd generation of the SysMLHelperProfile, a profile I originally developed to support my Rhapsody training, but which has evolved through my work with different companies rolling out Rhapsody for the first time.
The goal of the SysMLHelper profile is to make MBSE with Rhapsody simple.


















In this 3rd generation of the profile I’m looking to make it even smoother. Creating the initial Requirements Analysis package structure requires only a couple of clicks.












In the box here, it’s asking me to name the package. I’m going to call this FeatureA. Let’s imagine it represents a set of use cases that relate to a new feature for an existing system that we want to work on.



A use case diagram has been created programmatically based on a list of actors in a property in the profile, and hence is customizable.
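Because the actor list is just a Rhapsody property, populating the model programmatically is straightforward. Below is a sketch of the general shape of such a helper; the package name and property key are made up for illustration, and the real profile's property names and behaviour may differ (the key is also assumed to exist, otherwise getPropertyValue can throw).

    import com.telelogic.rhapsody.core.*;

    public class CreateActorsFromProperty {
        public static void main(String[] args) {
            IRPApplication app = RhapsodyAppServer.getActiveRhapsodyApplication();
            IRPProject project = app.activeProject();

            // Hypothetical package name and property key, for illustration only
            IRPModelElement actorPkg =
                    project.findNestedElementRecursive("ActorPkg", "Package");
            String actorList = null;
            try {
                actorList = project.getPropertyValue(
                        "SysMLHelper.RequirementsAnalysis.ActorList");
            } catch (Exception e) {
                // property not defined in this model/profile
            }

            if (actorPkg != null && actorList != null) {
                for (String actorName : actorList.split(",")) {
                    // addNewAggr creates a new owned element of the given metaclass
                    actorPkg.addNewAggr("Actor", actorName.trim());
                }
            }
        }
    }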









If we look in the browser we can see that the helper profile has created 3 different types of packages.

The main package is the use cases package where we can create use case diagrams with use cases for a feature, but there’s also an actors package, for shared actors, and a requirements package where we want the requirements to go. The new profile has no dependency on a fixed root structure, so this opens it up to being used more flexibly in existing projects.


The profile tailors Rhapsody so that the type of package is clear from its icon and category name, and ordering puts things where they’re wanted. Each of these packages plays a clear role, and the right-click menus have been simplified accordingly.

We’re not going to use IBDs and BDDs in the use case model, so I’ve removed and simplified the right-click menus.

If all the packages are built with consistent and appropriate modelling constructs for their role, then a measure of uniformity will permeate through our models, making them easier to build, navigate and review. This leads to more consistent, simplified usage.

The use case diagram is also customized and includes a process note giving advice.

As with the Generation 2 profile, double-clicking will create a nested activity diagram pre-populated with a template that conveys consistency and gets people working straight away.

This Activity Diagram is tailored to focus on textual use case steps, with a simplified drawing toolbar so that users can easily access the tools needed for the job and are not left pondering some of the more eclectic activity diagram tools.



The Gen 3 profile includes a new auto-flow feature for requirements. If I create requirements they will flow automatically into the requirements package based on the dependencies between them.
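As a rough illustration of what an auto-flow can amount to (ignoring the dependency-based routing described above, and not the profile's actual code), requirements found under the use case package can simply be re-parented into the requirements package. A minimal sketch, assuming the two packages are passed in by the caller:

    import com.telelogic.rhapsody.core.*;

    public class AutoFlowSketch {
        // Move any requirements created under the use case package into the requirements package
        public static void flowRequirements(IRPPackage useCasePkg, IRPPackage reqtsPkg) {
            IRPCollection nested = useCasePkg.getNestedElementsRecursive();
            for (int i = 1; i <= nested.getCount(); i++) {
                IRPModelElement el = (IRPModelElement) nested.getItem(i);
                if ("Requirement".equals(el.getMetaClass())) {
                    // Re-parenting keeps existing dependencies to the requirement intact
                    el.setOwner(reqtsPkg);
                }
            }
        }
    }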








Obviously, a system will be able to perform multiple use cases and multiple features. Imagine that the project is for a single system. We might have one user working on new use cases for a new feature in version 1 while another user works on use cases for a different feature for version 2.

We can create any number of use case packages. We’re using the use case package here to group use cases into features or functions that might constitute marketable commodities. We can run this command multiple times in the same project, giving a unique name to each package.
When I create a use cases package, the profile is intelligent enough to know that I already have an actors package, hence it will prompt me to make use of the actors in it.

We can also choose whether or not to create a separate requirements package.

We now have two use case packages representing different features. However, they are in the same model and can share the same actors. Importantly, over time you need the flexibility to re-factor your use case model to include multiple features or requirement sets, even where the different use case models were initially built by different users.








As I’m probably going to deploy this with Rhapsody Model Manager the helper is experienced enough to know that by selecting Store in separate directory on the root packages we will make units easier to find on the file system later.









With Generation 3 of the profile I’ve switched to using properties. The use of Rhapsody properties replaces the previous profile’s use of tags under root packages with fixed names. This makes it simpler to configure and maintain, and more consistent with other profiles such as the SE Toolkit.


You’ll notice I’ve also added a Perspective to the Properties pane so that you can view the properties for the profile. The defaults here are drawn from a property file in the profile folder, so can easily be changed.
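For helper code, the practical difference between the old tag-based scheme and the new property-based one is mostly how configuration is looked up. A hypothetical before/after sketch (the package name, tag name, and property key are all invented for illustration):

    import com.telelogic.rhapsody.core.*;

    public class ConfigLookupSketch {
        // Gen 2 style: read a tag with a fixed name under a fixed-name root package
        public static String getViaTag(IRPProject project) {
            IRPModelElement root =
                    project.findNestedElementRecursive("RequirementsAnalysisPkg", "Package");
            if (root == null) {
                return null;
            }
            IRPTag tag = (IRPTag) root.findNestedElement("rootPackageName", "Tag");
            return (tag != null) ? tag.getValue() : null;
        }

        // Gen 3 style: read a property from any context element; defaults come from the profile
        public static String getViaProperty(IRPModelElement context) {
            return context.getPropertyValue("SysMLHelper.General.RootPackageName");
        }
    }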











If I tick Enable Gateway types for example, then I can create a stereotyped requirements package when I create the structure.

The goal here is to get the best balance between integration and isolation. One of the benefits of bringing users into the same project is to improve collaboration, especially as Rhapsody Model Manager brings the capability to view Rhapsody projects via the web client.













Requirement stereotypes work really well if we apply a Format to the stereotype. We can do this by right-clicking on the stereotype to access its Format… menu.


We can then say that when this stereotype is applied to a requirement, we want Rhapsody to colour it a specific colour like green.

We can now see that requirements with the stereotype applied are different from other requirements in the project. This gives us a visual cue that the requirement relates to a different specification or collection than the other requirements in the same project.
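The colour comes from the Format on the stereotype, but applying the stereotype in bulk is easy to script. A small sketch (the stereotype name is made up) that stamps whatever requirements are currently selected in the browser:

    import com.telelogic.rhapsody.core.*;

    public class StampSelectedRequirements {
        public static void main(String[] args) {
            IRPApplication app = RhapsodyAppServer.getActiveRhapsodyApplication();

            // Apply a hypothetical stereotype to the currently selected requirements
            IRPCollection selected = app.getListOfSelectedElements();
            for (int i = 1; i <= selected.getCount(); i++) {
                IRPModelElement el = (IRPModelElement) selected.getItem(i);
                if ("Requirement".equals(el.getMetaClass())) {
                    el.addStereotype("FeatureAReqt", "Requirement");
                }
            }
        }
    }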
















The helper includes Start link and End link commands that are intelligent enough to know which type of relation we want to create based on the elements we selected. Again, this is based on property settings in the profile. It can even populate the links on the diagram, as it knows the elements are there.
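A guess at the shape of that logic is sketched below; the stereotype choices here are illustrative only, whereas the real helper drives them from the profile's property settings.

    import com.telelogic.rhapsody.core.*;

    public class StartEndLinkSketch {
        private IRPModelElement startElement;   // remembered when "Start link" is invoked

        public void startLink(IRPModelElement selected) {
            startElement = selected;
        }

        public void endLink(IRPModelElement selected) {
            if (startElement == null || selected == null) {
                return;
            }
            IRPDependency dep = startElement.addDependencyTo(selected);

            // Illustrative rules only: choose the stereotype from the ends' metaclasses
            boolean startIsReqt = "Requirement".equals(startElement.getMetaClass());
            boolean endIsReqt = "Requirement".equals(selected.getMetaClass());
            if (startIsReqt && endIsReqt) {
                dep.addStereotype("derive", "Dependency");
            } else if (endIsReqt) {
                dep.addStereotype("refine", "Dependency");
            }
            startElement = null;
        }
    }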



The final thing to note is that as well as automatically creating the structure, the helper is automatically maintaining a package diagram for the project. Here we can see that all the feature packages are sharing the same actors package.


With help from my automation, this could be one of several automatically maintained diagrams in the project.






This concludes the demo for now. It’s just a glimpse really of the stuff I’ve been working on to make the process of using and deploying Rhapsody to a large team that little bit simpler. I’ve just shown the requirements analysis method helpers, one of three stepping stones to achieving a working white box architecture traced to system requirements.

In today’s world many things are automated. Like Rhapsody’s built-in Harmony/SE toolkit, the SysMLHelper brings the idea of automation to SysML modeling tasks. You want your team to be doing fun and creative tasks, in a consistent way, as part of a big team in a shared model without stepping on each other’s toes.

This requires more than installing a tool. You need a combination of a modelling language, tools, people, and process to come together. You essentially need a system. The more automation you can get into that system, the faster it will run, the better it will scale, and the more consistent and predictable its output will be. Consistency also shows that you are meeting your process, which may be important for certification and quality assurance reasons.

Some of the methods are based heavily on IBM's Harmony process but use an open-source Rhapsody profile, meaning we have the option to tailor it to fit your organisation and business goals.
I can offer both consulting and training to take these ideas and make them work for your organisation, meaning less time spent trying to re-invent the wheel, and increasing your chance of achieving success earlier in your adoption lifecycle.

If you want to explore any of these ideas, then feel free to look at my www.mbsetraining.com website, or fire me an email.

Sunday 8 April 2018

Release of V2.2

This version is functionally equivalent to 2.1 (Release), is considered a Release version, and contains a fix to a threading issue that was preventing creation of the functional block hierarchy from working in Rhapsody 8.3.

The following changes were made for v2.2 on GitHub (released 08-APR-2018 - Release build):

#247 08-APR-2018: Fix threading issue crashing functional block hierarchy creation in 8.3 (F.J.Chadburn)
#248 08-APR-2018: Update copyright notice year to 2018 (F.J.Chadburn)

It is recommended that all users of the profile move to this version to avoid this issue. The issue relates to limitations in the Rhapsody API when trying to get GUIs to work in separate threads. The solution took a little while but came with the help of Andy Lapping. He's put a bit of background here:

http://www.merlinscave.info/Merlins_Cave/Tutorials/Entries/2018/4/3_Building_GUI_Based_Helpers_for_Rhapsody_-_Dealing_with_Multi-Threading.html
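For anyone hitting something similar in their own Java plugins, here's a minimal standalone sketch of the general principle that Andy's article covers in much more depth: keep the Swing GUI on the Event Dispatch Thread and funnel all Rhapsody API calls through a single dedicated thread. This is an illustration of the idea only, not the exact fix that went into v2.2.

    import com.telelogic.rhapsody.core.*;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.swing.JOptionPane;
    import javax.swing.SwingUtilities;

    public class ThreadingSketch {
        // All Rhapsody API access goes through this one thread
        private static final ExecutorService rhapsodyThread =
                Executors.newSingleThreadExecutor();

        public static void main(String[] args) {
            // Build and show the GUI on the Swing Event Dispatch Thread
            SwingUtilities.invokeLater(() -> {
                String name = JOptionPane.showInputDialog("New package name:");
                if (name != null) {
                    // Hand the model change back to the Rhapsody thread
                    rhapsodyThread.submit(() -> {
                        IRPApplication app =
                                RhapsodyAppServer.getActiveRhapsodyApplication();
                        app.activeProject().addNewAggr("Package", name);
                    });
                }
                rhapsodyThread.shutdown();   // finish queued work, then let the JVM exit
            });
        }
    }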