Tuesday, 7 July 2020

Pre-release V3.0.x, fixes to object files on GitHub

Latest on GitHub is now 3.0.x.PreRelease.

This release contains only minor functional updates but important fixes to the object files. The repo is now aligned with the correct object files, plus some icon fixes.

#258 07-JUL-2020: Modified the default RequirementsAnalysisPkg.types to work with a different DOORS sample (F.J.Chadburn)
#259 07-JUL-2020: Added back in the missing icons (F.J.Chadburn)
#260 07-JUL-2020: Corrected object files so that the ExecutableMBSE profile works correctly (F.J.Chadburn)
#261 07-JUL-2020: Fix to Gateway project builder so that it assumes .rpyx rather than .rpy projects (F.J.Chadburn)
#262 07-JUL-2020: Fix to remove spaces from ReferenceUnitPath properties so that they get set-up correctly (F.J.Chadburn)
#263 07-JUL-2020: Some FunctionalDesignProfile property file tweaks (F.J.Chadburn)
#264 07-JUL-2020: Update version number in txt files (F.J.Chadburn)

Thursday, 30 May 2019

Pre-release V3.0.w, incl. ExecutableMBSE and FunctionalDesign profiles

The configured release of V3.0.w on GitHub represents a major re-write. This is pre-release/beta and aligns with some new tutorial and support documentation material I have.

It's about 1.5 years of work resulting in a reformed "family" of profiles, rather than a single profile. This includes an ExecutableMBSE profile which makes use of New Term packages and thus offers more flexibility to support many people working in the same model on different use case packages. There is also a separate FunctionalDesign profile which makes use of New Terms to simplify Rhapsody deployment. A separate TauMigrator profile is a very specific project focused on transforming Tau executable statemachines into their Rhapsody equivalents as executable activities. Finally, there is the SysMLHelper profile, which makes use of thread-safety improvements and aims to preserve backward compatibility for existing users of the SysMLHelper. This release includes major refactoring in how the Swing dialogs are invoked, based on some investigations/recommendations from the dev team. As such, there's still some testing to do. However, I wanted to get the latest configured on GitHub so that it's shareable.

#249 29-MAY-2019: First official version of new ExecutableMBSEProfile (F.J.Chadburn)
#250 29-MAY-2019: First official version of new FunctionalDesignProfile (F.J.Chadburn)
#251 29-MAY-2019: First official version of new TauMigratorProfile (F.J.Chadburn)
#252 29-MAY-2019: Implement generic features for profile/settings loading (F.J.Chadburn)
#253 11-SEP-2018: Move from using tags to properties to control plugin behaviour (F.J.Chadburn)
#254 11-SEP-2018: Moved .hep files into _rpy folder to enable default loading by profile (F.J.Chadburn)
#255 30-MAY-2019: Removed SysMLHelper (Rhp813-Archive).zip as no longer applicable (F.J.Chadburn)
#256 29-MAY-2019: Rewrite to Java Swing dialog launching to make thread safe between versions (F.J.Chadburn)
#257 11-SEP-2018: Move populate requirements/update verifications for SD(s) menus to Reqts menu (F.J.Chadburn)

Sunday, 23 September 2018

Driving Quality using Executable MBSE: An IBM Rhapsody Enlightenment webinar recording

In August I did a webinar as part of the IBM Rational Rhapsody enlightenment series that covers the slightly more advanced topic of executable MBSE.

The following is IBM's link to the recording (you need to give them your email address but it plays straight away):

The webinar was mainly demo. It focuses on using executable MBSE to build test case sequence diagrams and plans from use cases using a method I'm calling ROCKETs (Requirements Originating in the Context of Knowledge from Executing Transitions on Statecharts). It's a method that works really well with Rational DOORS.

This shows a prototype of my v3 SysMLHelper, an open-source plug-in and profile that intends to make the job simple. It's a much more advanced topic than the first video but I've tried to make it fun and interesting. I start with a blank model and work through the process of making an executable model from nothing, finishing with the use of an add-on to Rhapsody called "Test Conductor" that enables suites of tests to be built and executed.

Of course, this is possible without automation, only I wouldn't be able to do it in 20 mins! It would take me weeks to build the model and explain how to build it. The automation I show is something I've been working on and using for over 3 years now. There are over 10K lines of Java and some of my best ideas for making Executable Model-Based Systems Engineering (MBSE) with IBM Rhapsody Designer fun and relevant ;-)

Note: My next public training course is w/b 23rd Oct 2018 at HORIBA-MIRA (in concert with the Functional Safety Team).

Wednesday, 11 July 2018

Why are textual activity diagrams good for creating requirements from use cases?

One successful approach is to use one activity diagram per use case to document the steps of the use case, together with trigger, pre-conditions and post-conditions.
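As a sketch of the information such an activity diagram carries, here is a minimal Java data structure capturing a use case with its trigger, pre-conditions, post-conditions and steps. All names and the example content are illustrative only, not part of any SysML tool API:

```java
// Hypothetical sketch: the information a per-use-case activity diagram
// typically documents, captured as a simple data structure.
import java.util.List;

public class UseCaseSpec {
    public final String name;
    public final String trigger;
    public final List<String> preConditions;
    public final List<String> postConditions;
    public final List<String> mainFlowSteps; // the control-flow steps on the diagram

    public UseCaseSpec(String name, String trigger,
                       List<String> pre, List<String> post, List<String> steps) {
        this.name = name;
        this.trigger = trigger;
        this.preConditions = pre;
        this.postConditions = post;
        this.mainFlowSteps = steps;
    }

    // An invented example use case, purely for illustration.
    public static UseCaseSpec example() {
        return new UseCaseSpec(
            "Unlock vehicle",
            "Driver presses unlock button on key fob",
            List.of("Vehicle is locked"),
            List.of("All doors are unlocked"),
            List.of("System receives unlock request",
                    "System validates key fob identity",
                    "System unlocks the doors"));
    }
}
```

The point is that the diagram bundles all of these facets in one place, rather than scattering them across a document.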

The benefits of the activity diagram are:
  1. Their flow-chart like syntax can be read with little or no training. This makes them the most easily consumable behavioural diagrams in UML and SysML and hence a good place to start engagement with stakeholders (caveat: stick to using control flow semantics). 
  2. Alternate flows of the use case can be expanded on the same canvas (either using interruptible regions or decision nodes) giving the diagram a power of analysis that is difficult with textual step-based approaches. “If you don’t actively attack the risks, the risks will actively attack you.” Tom Gilb, 1988

Importantly, using a model, unlike a Word document, we can easily capture traceability between use case steps and textual requirements using SysML relations (satisfy, refine, and other dependencies). From the activity diagram we might create textual requirements. The activity diagram is the canvas on which we reconcile textual system requirements with the steps of a use case. They provide complementary views of the system from an external perspective.

Traceability in-situ has significant benefits, including:
  1. Increased ability to cope with requirements churn. When developing new ideas requirements churn cannot be ignored. When performing requirements definition work, churn is both a good thing (we hone down the ideas to distil them into the essence of need) and a bad thing (if we can't cope with changes). Capturing traceability in-situ as part of the task makes it virtually effortless, whereas waiting until after the event makes it virtually impossible.
  2. Increased audit-ability and assurance of due diligence. This is necessary for safety-related processes such as ISO 26262 or for general process compliance to standards such as A-SPICE.
  3. Improved reviews. Requirements can be written so they are SMART (Specific, Measurable, Achievable, etc.) but we also have a picture of the story of a use case in which to put and read them in context, making the requirements review easier and more pleasurable (focusing less on the English grammar and more on the user experience).
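Benefit 1 can be sketched in code. The fragment below models in-situ traceability as a simple map from use case steps to the requirements they trace to, with an impact query for when a requirement churns. It is illustrative only; the names are hypothetical and not from any RM tool API:

```java
// Illustrative sketch of in-situ traceability: each use case step records
// which textual requirements it satisfies/refines at the moment it is authored.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class TraceModel {
    // step id -> ids of the requirements it traces to
    private final Map<String, Set<String>> stepToReqs = new HashMap<>();

    public void addTrace(String stepId, String reqId) {
        stepToReqs.computeIfAbsent(stepId, k -> new HashSet<>()).add(reqId);
    }

    // Churn support: when a requirement changes, find the impacted steps.
    public Set<String> stepsImpactedBy(String reqId) {
        Set<String> impacted = new TreeSet<>();
        stepToReqs.forEach((step, reqs) -> {
            if (reqs.contains(reqId)) impacted.add(step);
        });
        return impacted;
    }
}
```

Because the trace is recorded as each step is written, the impact query costs nothing extra later; that is the "virtually effortless" point above.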
In A-SPICE this relates to:

SYS.1.BP1: Obtain stakeholder requirements and requests. Obtain and define stakeholder requirements and requests through direct solicitation of customer input and through review of customer business proposals (where relevant), target operating and hardware environment, and other documents bearing on customer requirements. [OUTCOME 1, 4]

SYS.1.BP2: Understand stakeholder expectations. Ensure that both supplier and customer understand each requirement in the same way. [OUTCOME 2]

NOTE 4: Reviewing the requirements and requests with the customer supports a better understanding of customer needs and expectations.

Traceability relates to:

SYS.1.BP5: Manage stakeholder requirements changes. Manage all changes made to the stakeholder requirements against the stakeholder requirements baseline to ensure enhancements resulting from changing technology and stakeholder needs are identified and that those who are affected by the changes are able to assess the impact and risks and initiate appropriate change control and mitigation actions. [OUTCOME 3, 6]

NOTE 5: Requirements change may arise from different sources as for instance changing technology and stakeholder needs, legal constraints.

Thursday, 28 June 2018

Why is the payload textual requirements?

Despite being an executable MBSE method, the ROCKET method takes a strong stance that the payload carried by ROCKETS will always be textual requirements. There are good reasons for this.

Firstly, textual requirements are a common currency (as a manager I know used to say a lot). Executable MBSE can be our money-making machine but - by producing and consuming textual requirements in models - we can spend the money in different shops (outside of the model). For example, we can put textual requirements into a Requirements Management (RM) tool like DOORS NG or DOORS 9, send them to suppliers in ReqIF format or via an exchange server, and we can exchange them with test management tools like RQM. A focus on textual requirements as the formal hand-off eases adoption of MBSE techniques into an existing requirements-driven business or process and means that all those good things that come from a focus on well-formed and well-written requirements still apply.

Secondly, SysML provides a very clear, mature and intuitive way of dovetailing textual requirements with models, well supported by tool integrations with RM tools. Textual requirements being neutral to modeling elements can be associated with different views of the same system, e.g., activity models, interaction views, state machine diagrams, use case models, static design, and interfaces. Requirements are omnipresent, in that they can be shown on any of the diagrams or meta-elements that make up the OMG SysML standard. You can use a host of SysML artefacts like tables, matrices, and diagrams to view and create traceability in a model to enable you to get the best of both worlds.
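The "requirement neutral to modelling elements" idea can be sketched as a simple traceability-matrix row: one textual requirement associated with elements of different metatypes (an action, a message, a transition). The code below is illustrative only; the types and names are invented, not a real SysML tool API:

```java
// Illustrative sketch: one requirement traced to model elements of
// different metatypes, viewed as a row of a traceability matrix.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReqMatrix {
    record ModelElement(String metatype, String name) {}

    private final Map<String, List<ModelElement>> reqToElements = new HashMap<>();

    public void trace(String reqId, String metatype, String name) {
        reqToElements.computeIfAbsent(reqId, k -> new ArrayList<>())
                     .add(new ModelElement(metatype, name));
    }

    // One matrix row: the views in which this requirement appears.
    public List<String> row(String reqId) {
        List<String> cells = new ArrayList<>();
        for (ModelElement e : reqToElements.getOrDefault(reqId, List.of())) {
            cells.add(e.metatype() + ":" + e.name());
        }
        return cells;
    }
}
```

A single requirement thus appears in the activity, interaction and state machine views at once, which is what makes the matrix and table views useful.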

This shows a textual requirement tracing to an action and accept event action on a SysML/UML activity diagram:

This is the same behavior described in a SysML/UML sequence diagram:

This is the same requirement tracing to a transition on a SysML/UML state machine:

All of these are examples of refinement at the same level of behavioural abstraction. Each transformation is done for a different purpose.

Importantly, since the same textual requirement can be associated with different modelling elements, something very subtle but inherently powerful can occur in the ROCKET method: the transformation of model data from one form to another. Each of these behavioural representations represents the same requirements in a different way. Like stepping stones (moon hops) over a stream, they enable you to get to the other side without getting wet (which is never a good thing when it comes to being a systems engineer ;-). You can even make a choice about whether to use the stones again or throw them away. Use case steps can be transformed into operations and events and put into state machines. These can be executed to create test scenarios that show the operations and events, which can then be copied to Blocks that represent architectural components. By virtue of the copy, those Blocks will trace to the same requirements, so that when you create a sub-system requirement the traceability to the higher-level requirement can be done in-situ.
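The stepping-stone transformation can be sketched in a few lines: a use case step is turned into an operation, and its requirement traces are copied along with it, so the result still traces back to the same requirements. This is a hedged illustration of the idea only; the types and naming rule are invented, not the plugin's actual implementation:

```java
// Hedged sketch of the "stepping stones" idea: transforming a use case
// step into an operation while carrying its requirement traces with it.
import java.util.HashSet;
import java.util.Set;

public class StepToOperation {
    record Step(String text, Set<String> tracedReqs) {}
    record Operation(String name, Set<String> tracedReqs) {}

    // Derive a camelCase operation name from the step text and copy the traces.
    public static Operation transform(Step step) {
        String[] words = step.text().toLowerCase().split("\\s+");
        StringBuilder name = new StringBuilder(words[0]);
        for (int i = 1; i < words.length; i++) {
            name.append(Character.toUpperCase(words[i].charAt(0)))
                .append(words[i].substring(1));
        }
        // Copying the trace set is what preserves in-situ traceability
        // through each transformation.
        return new Operation(name.toString(), new HashSet<>(step.tracedReqs()));
    }
}
```

For example, the step "validate key fob" tracing to REQ-1 would become an operation validateKeyFob that still traces to REQ-1.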

The method is highly formalised, highly systematic, and thus repeatable. By systematic, I mean that it involves lots of little steps (nope, not a giant leap for man/human kind ;-). The systematic nature of the model transformations means they can be supported by automation that speeds them up. Engineers can focus on engineering decisions about their problem, rather than on how to build models or use the tool, and tasks of refining the requirements can be optimised into phases (don’t worry about executing a model until you’ve actually spoken to stakeholders to validate the needs and the art of the possible). Hurrah! What's not to like?

The consistency of the systematic approach and the use of automation leads to a production-line mentality. Models that come off the line will have common layouts and make consistent use of the SysML metamodel. This makes external audits a doddle and means that engineers working on one model can easily switch to a model created by a different team using the same method and tools, enabling you to move resources easily between projects. Systematic approaches can be documented and thus used for quality assurance purposes, enabling external audits such as ISO 26262 and A-SPICE for automotive, or FDA approval of medical devices.

Phew! (I'll take a breather, stay tuned for more...)

Monday, 25 June 2018

Introducing ROCKETS: An executable MBSE process

What is the ROCKETS MBSE process?

I've decided to name my MBSE method ROCKETS. ROCKETS is a derivative of the Harmony/SE method that uses executable MBSE with SysML and IBM Rational Rhapsody. It stands for Requirements Originating in the Context of Knowledge Executed as Transitions on Statecharts. It’s designed to be fun. To make it fun, a few aspects are considered important:

  1. Automation, so that the user is focused on the creative aspects associated with solving the problem, rather than setting up the model.
  2. Systematic working, so that the method splits the problem down. The adage is to take tiny steps rather than a giant leap.
  3. Team work. How people interact during the process is considered as important as the artefacts that are produced.

The approach is use case driven, and conforms to the original definition of a use case as:
“A description of a set of sequences of actions, including variants, that a system performs that yield an observable result of value to an actor” [Booch, Rumbaugh, Jacobson 1999] (where actors are entities outside of the system that gain value from or participate in the use case)

Importantly, use cases are stories about how the system is intended to be used. ROCKETS is therefore a method focused on concept of operations (CONOPS) modeling. CONOPS models describe clearly and concisely what is to be accomplished and how it will be done using available resources. As such, it is independent of aspects such as programming language and software design. We are trying to model who needs to do what, rather than how it is done.

What are the phases of the ROCKET process?

There are three phases to the ROCKET process:

  1. Payload definition.
  2. Launching the rocket(s).
  3. Assembling the space station.

The key challenge overcome by the ROCKETS process is the management of complexity: the division of responsibility across a team and between teams of engineers such that the task can be accomplished with productive use of resources. In practice these phases occur in parallel; as we can’t get the whole space station into a single rocket, we need to send up a series of rockets.

The space station analogy is a useful one because we can picture it as a series of components that are interconnected. Importantly, we might read the components as component owners. This is a systems engineering task to allocate work to different teams by deciding which team needs to do what.

Payload definition starts on the ground with the creation of functional system requirements that we intend to launch up to the space station. To get them to the space station we need to launch them. We do this with executable MBSE. Much like how we might hypothesise how a clock might work, executable MBSE involves building a simulation that we can interact with to test how the system may work. We then need to get the payload into space in a rocket, where it will be assembled by a separate team.

What is the relationship to process standards?

The ROCKETS process aligns directly with SYS.1 to SYS.5 of A-SPICE 3:

SYS.1 Requirements Elicitation
SYS.2 System Requirements Analysis
SYS.3 System Architectural Design
SYS.4 System Integration and Integration Test
SYS.5 System Qualification Test

Friday, 22 June 2018

Gen #3 profile testing with TestConductor

I've concluded the first round of enhancements focused on being able to move sequence diagrams, created by interacting with the simulation, into test cases in Test Conductor. I actually found more enhancements were needed than expected, hence a few changes from the Gen #2 profile were needed. The main thing was to make use of Interfaces on ports (rather than using rapid ports). This introduced an issue I hadn't considered around inheritance in the framework. However, that's all solved with a few changes to the helper.