Blog

  • The Easy Framework for Conceptual Modelling: a Map for Sensemaking

    [1] Y. Wand and R. Weber, “Research commentary: information systems and conceptual modeling – a research agenda,” Information Systems Research, vol. 13, p. 363, Jan. 2002.

    Have you ever found your conceptual model leading to confusion and frustration instead of the clarity you were aiming for? Or, while searching for the right modelling diagram, have you been overwhelmed by all the SysML, BPMN and ArchiMate of the internet?
    You are not alone. As highlighted above, Conceptual Modelling often fails. Further, as [Estefan 2007] puts it, “the conceptual modeling of complex systems and processes has not yet converged on a unified, consolidated modeling framework”.

    In my previous post, I explored why conceptual modelling matters – as it allows Engineers to make sense of the World and of the Machine. I’ve named this cognitive process Sensemaking. But knowing why isn’t enough – we need to know what to model and how the pieces fit together.

    That’s where the Easy Framework for Conceptual Modelling comes in. It’s not a comprehensive modelling language like UML or a complete methodology like TOGAF – or a universal toolbox as in the IREB Syllabus for Requirements Modelling. Instead, it’s a practical toolkit tailored specifically for market-driven, software-intensive, user-facing product engineering – focused on what teams actually need so that they can think together.

    In this post, I’ll unveil the structure of this framework, showing how its components fit together to support both problem understanding and solution design. In later posts, I’ll dive deeper into each component, providing examples, notation guides, and practical tips.

    Let’s map out the territory of our Sensemaking journey.

    The Need for Structure

    As I shared in my framework introduction, one of the key reasons conceptual modelling rarely gets done is the vastness of modelling. Without structure and constraints, modelling efforts lack focus.

    The EFCM framework (Easy Framework for Conceptual Modelling) addresses this by providing a curated set of models organized around a simple principle: understanding the spaces of both the problem and the solution. This structure gives teams a clear starting point and guidance for making sense of the world and the machine.

    Diagram showing how Sensemaking is concerned with the World.
Here is the Graphviz script:
digraph WhySenseMaking {
    edge[dir=back];
    style=filled;
    color=lightgrey;
    node [style=filled,color=white];
    compound=true;
    subgraph cluster_world {
        World -> Machine -> User[label="interacts with" ];
        World -> User[label="is in" ];
        World -> Goal[label="in" ];
        Goal -> User [label="has" ];
        label = "The world";
    }
    subgraph cluster_Stakeholders {
        Sensemaking -> Engineer [label="needs" ];
        label = "The team";
    }
    Engineer -> Machine [dir=forward label="builds"];
    World -> Sensemaking [label="of" ltail="cluster_world"];
}

    But unlike comprehensive frameworks that attempt to model everything, the EFCM framework is deliberately constrained to what’s essential for user-facing, market-driven product engineering. It’s meant to be learned and applied quickly, avoiding the overwhelm that comes with more complex approaches.

    My Conceptual Modelling framework is largely based on the excellent work of Klaus Pohl and his team [Pohl 2010]. I based the IREB training material I created in 2009 on the earlier German version of this book – below my latest copy. I have already had the occasion to express my gratitude to Klaus. This book now has about 800 pages and, like the IREB contents, it is intentionally universal and can be somewhat abstract at times. Because of its narrower application scope, I intend my toolkit to be more approachable and focused.

    K. Pohl 2010: Requirements Engineering – Fundamentals, Principles, and Techniques

    Framework Overview: Three Sets of Models

    The EFCM framework organizes the models into three categories:

    1. Models of the Problem: Understanding the context of the machine, the goals of its users, and the project stakeholders
    2. Models of the Solution: Representing the data, behavioural, and functional perspective on the machine
    3. Supporting Views: Providing foundation and validation for the other models

    Here is the PlantUML mindmap script:

@startmindmap
+_ Framework .
++_ . Structure .
+++_ . View
+ Conceptual\nModels
++ of the\nproblem
+++ Context diagram
+++ Goal model
+++ Stakeholder map
++ of the\nsolution
+++ Static domain model, \na.k.a. data model
+++ Behavioural model
+++ Functional model
++ Supporting\nviews
+++ Glossary
+++ Scenario

@endmindmap

    This structure reflects a fundamental principle: in order to solve a problem, you need to understand the problem.

    First the problem, then the solution?

    My favorite LLM, Claude 3.7 Sonnet, has proposed the following phrase: “before diving into solutions, teams need to understand the problem they’re solving”. This reflects the historical phased view of engineering and the “building” metaphor – which is not appropriate in our context. “Before” is wrong.
    As described by Bashar Nuseibeh with his Twin Peaks model [Bashar Nuseibeh 2001], the exploration of the problem space and solution space happens iteratively and incrementally – together.

    I insist here because it has a major impact on Systems Engineering and Requirements Engineering activities – and on the structure of our Framework.

    Of course, the team needs to understand the problem in order to think about the machine. However, the other way round is also key: understanding the machine leads to the discovery of the actual problem. Let’s take an example. One of the problems a car engineer faces is back pain, which obviously belongs to the World. Where does the back pain problem come from? Does it come from the purpose – which is to move from point A to point B? If the solution had been a quantum teleportation portal, you would surely have had problems to deal with – dizziness maybe? random disintegration? – but no reason to deal with back pain.
    Back pain comes from the solution, here the car, which forces the driver to sit still for a long time. The problem recursively comes from the machine.
    Now that we’re clear on the importance of not addressing problem and solution sequentially, but in an iterative and incremental way, let’s explore the problem space within our EFCM framework.

    Modelling the Problem: Understanding the World

    Problem models help teams make sense of the World in which the Machine operates. They answer questions about why we’re building something, for whom, and in what context.

    Context Diagram

    The Context Diagram shows how the Machine interacts with the entities of its environment, identifying actors and interfaces. It answers questions like:

    • What is the machine? How do we name it? What are its boundaries?
    • What is the context of the machine? What actors and users are interacting with the machine? With whom else are they interacting?
    • What is flowing between the machine and its context?

    This model is foundational because it establishes the scope and boundaries of the machine, enlightening the relevant part of the World. As we’ll see in a dedicated post, the key here is to think “flow” – of matter, energy, and information – anything flowing through the boundaries of the machine.
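    To make this concrete, here is a minimal context diagram sketched in Graphviz DOT. The Machine, its actors and its flows are all hypothetical, assuming a coffee machine as an example:

```dot
// Hypothetical coffee machine context diagram.
// The Machine is the box; edge labels name what flows
// (matter, energy, information) across its boundary.
digraph ContextDiagram {
    Machine [label="Coffee Machine", shape=box];
    User -> Machine [label="beverage selection (information)"];
    Machine -> User [label="coffee (matter)"];
    PowerGrid -> Machine [label="electricity (energy)"];
    Technician -> Machine [label="maintenance commands (information)"];
}
```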

    Goal Model

    The Goal Model captures the objectives of both the users of the machine, and of the stakeholders of the project. It shows how these goals decompose and relate to each other. It answers questions like:

    • What do the users of our machine want to achieve? What are their essential pains and needs? What does our engineering organization want to achieve?
    • How do goals relate to each other? What is the “why-how” hierarchy of our goals and pains? Can we hierarchically categorize – “essentialize” – our goals?
    • What are potential conflicts or trade-offs?

    By making goals explicit, this model helps teams align on purpose and priorities. We’ll see that the causality relation plays a key role here, and we’ll discover what we can gain by making these relations explicit.
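    As a sketch of the idea, here is a tiny goal model in Graphviz DOT. The goals are illustrative, borrowing the “move from point A to point B” example used elsewhere in this post:

```dot
// Illustrative "why-how" goal hierarchy: reading an edge
// downwards answers "how?", upwards answers "why?".
digraph GoalModel {
    edge [dir=back];  // arrows point up, from sub-goal to parent goal
    "Move from A to B" -> "Drive comfortably";
    "Move from A to B" -> "Arrive on time";
    "Drive comfortably" -> "Avoid back pain";
}
```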

    Stakeholder Map

    The Stakeholder Map identifies the individuals and groups who have an interest in the success (or the failure) of the engineering project, along with their relationships and concerns. It answers questions like:

    • Who will be involved in the success of your product on the market? Who has the potential to harm your project – both within your organization and outside of it?
    • What are their primary concerns and interests?
    • How do they relate to each other? Who are the opinion leaders?

    This model ensures teams consider all perspectives and helps prevent later surprises from overlooked stakeholders. Put simply, my practical definition of a stakeholder is any person or group who can harm your project. You’d better engage with them.
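    As an illustration, a stakeholder map can be sketched as an undirected graph in Graphviz DOT. The roles and relationships below are hypothetical:

```dot
// Hypothetical stakeholder map: nodes are stakeholders,
// undirected edges capture their relationships and influence.
graph StakeholderMap {
    "Product Manager" -- "Lead Engineer" [label="aligns with"];
    "Key Customer" -- "Product Manager" [label="influences"];
    "Safety Authority" -- "Lead Engineer" [label="constrains"];
}
```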
    Now that we’ve covered the three models of the problem, let’s dive into the solution.

    Models of the Solution: Designing the Machine

    Solution models focus on the Machine we’re building, documenting its characteristics from complementary perspectives.

    [Maiden and Sutcliffe 1992] explain that “Domain abstractions represent the fundamental behaviour, structure and functions of a domain class”. Later on, [Pohl 2010] reports three key perspectives on a software-intensive solution: Data, Functional and Behavioural.
    Let’s look at our machine from these three perspectives.

    Static Domain Model

    The Static Domain Model (also called the “data perspective” by the [IREB], and the “structural view” by [UML]) defines the entities of the Machine’s context, their relationship and structure. It answers questions like:

    • What entities are relevant for our machine?
    • Are some of these entities composed together – like “A bicycle has two wheels“?
    • Do they share some common essence – like “A bicycle is a vehicle”?

    This model provides the vocabulary – the names, that is, the “cognitive building blocks” – for thinking about the solution, its behaviour and its functions.
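    The bicycle examples above can be sketched directly in Graphviz DOT. The notation below (edge labels for composition and generalization) is a simplification, not full UML:

```dot
// "is a" captures shared essence (generalization),
// "has" captures composition.
digraph StaticDomainModel {
    edge [dir=back];
    Vehicle -> Bicycle [label="is a"];
    Bicycle -> Wheel [label="has 2"];
    Bicycle -> Frame [label="has 1"];
}
```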

    Behavioural Model

    The Behavioural model shows how the machine evolves over time in response to events. It answers questions like:

    • What states can the machine be in?
    • What events trigger transitions between states?
    • What conditions must be met for transitions to occur?

    We’ll see how the hierarchy of states permitted by UML’s Statecharts can help abstract the essential states of a Machine, and how this leads to consistency in the machine’s behaviour and better learnability of its usage.
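    As a flat sketch (hierarchical states are left for the dedicated post), here is an illustrative state model of a hypothetical coffee machine in Graphviz DOT, with transitions labelled “event [condition]”:

```dot
// Hypothetical behavioural model: states as nodes,
// transitions labelled with triggering events and guard conditions.
digraph BehaviouralModel {
    node [shape=box, style=rounded];
    Idle -> Brewing [label="start pressed [water available]"];
    Brewing -> Idle [label="cup full"];
    Brewing -> Error [label="water empty"];
    Error -> Idle [label="tank refilled"];
}
```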

    Functional Model

    The Functional Model captures what the system does – its functions. It answers questions like:

    • What functions does the system provide?
    • How do inputs flow towards outputs?
    • How do functions interface with each other?

    We’ll see that the functional model, which I document with [data flow diagrams 1979], helps the team understand and agree on their machine’s essential functions and how they fit together. This strongly facilitates further activities, like architecture decomposition, problem solving, and system verification.
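    A data-flow-style sketch in Graphviz DOT, again assuming a hypothetical coffee machine: functions are the nodes, and the edge labels name the data or matter flowing between them:

```dot
// Hypothetical functional model: inputs flow through
// functions towards outputs, left to right.
digraph FunctionalModel {
    rankdir=LR;
    node [shape=ellipse];
    "Heat water" -> "Brew coffee" [label="hot water"];
    "Grind beans" -> "Brew coffee" [label="ground coffee"];
    "Brew coffee" -> "Dispense" [label="espresso"];
}
```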

    Supporting Views: Foundation and Validation

    Supporting views strengthen the other models by providing conceptual foundation and scenarios for validation.

    Glossary

    The Glossary defines key terms used across all models, ensuring consistent understanding. It answers questions like:

    • What do we mean by specific terms?
    • How do these terms relate to each other?
    • Are there synonyms or homonyms we need to clarify?

    An engineering project is a learning process – the team needs to discover and learn how to solve the problem at hand. By establishing a shared vocabulary, the glossary prevents misunderstandings and enables this collective cognition effort.

    We’ll see how the “universal glossary” is… a myth. We’re dealing with people, who think and communicate in natural language, which is, by definition, ambiguous. We need to accept it and be agile in our glossary documentation – as I’ll describe in a dedicated post.

    Scenario

    Scenarios describe concrete examples of how an actor would attempt to fulfil its goal with the machine, in narrative form.
    A scenario is a story. We Homo sapiens have grown and evolved with stories. We are very good at telling, remembering and understanding stories. Scenarios are a great tool as they make use of this natural facility, vividly bridging the gap between user goals and the models of the solution.

    Scenarios answer questions like:

    • How would the machine be used when willing to achieve a specific goal?
    • What can happen? What sequence of interactions would occur to achieve my goal? What can go wrong?

    Scenarios bring the models of the solution to life, and provide a way to validate that the machine addresses the intended problems.

    How These Models Work Together

    While I’ve presented these models separately, they form an integrated whole, connected and interdependent:

    • The Context Diagram enlightens the part of the World with which the Machine interacts – it will guide all other models
    • The Goal Model is the source of the Scenarios, and the rationale for the Functional Model
    • The Stakeholder Map identifies the people with whom you’ll check your models – starting with your Goal Model
    • The Static Domain Model defines the entities with which your team will think the Behavioural Model and the Functional Model
    • The Glossary ensures conceptual consistency across all models
    • Scenarios validate that the Solution Models address the goals

    These connections create a web of understanding, where insights from one model inform and constrain others. It is a great framework for your incremental, iterative and collaborative engineering journey – all the way up to product delivery.

    Getting Started: A Practical Approach

    In your modelling journey I recommend starting with these steps:

    1. Begin with the Context Diagram to enlighten the World
    2. Create an initial Glossary to align terminology early
    3. Develop the Goal Model to understand users and their objectives
    4. Draft the Static Domain Model to establish core concepts
    5. Add other models as needed based on your specific challenges

    Remember that models are tools for thinking, not deliverables to check off a list. Draw them together on the whiteboard, as a unique opportunity to learn, think and agree together.

    Next Steps: Diving Deeper

    In upcoming posts, I’ll explore each of these models in detail, sharing:

    • Notation guidelines and conventions
    • Examples
    • Common pitfalls and how to avoid them
    • Facilitation techniques for collaborative modelling
    • Tool recommendations and practical tips

    I’ll also share how I’ve used this framework to facilitate Sensemaking in complex engineering environments.

    Conclusion: Your Framework for Sensemaking

    The Easy Framework for Conceptual Modelling provides a structured approach to making sense together. By organizing models into problem space, solution space, and supporting views, it gives teams a clear path for understanding both the World and the Machine.

    Unlike more complex frameworks, the EFCM framework is designed to be accessible and immediately applicable in market-driven product engineering contexts. It focuses not on comprehensive documentation but on facilitating the cognitive journey of the team.

    What’s your experience with conceptual modelling? I’d love to hear your thoughts and experiences in the comments or on LinkedIn.

    In my next post, I’ll dive into the Context Diagram – the foundation for understanding your Machine in the World. Stay tuned!

    References

    [Archimate] ArchiMate – an open and independent enterprise architecture modeling language

    [Bashar Nuseibeh 2001] “Weaving Together Requirements and Architectures” https://ieeexplore.ieee.org/document/910904 – model duplicated in https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6470589

    [BPMN] Business Process Model and Notation – a graphical representation for specifying business processes in a business process model

    [Data flow diagrams 1979] Tom DeMarco’s data flow diagrams, part of structured analysis. See https://en.wikipedia.org/wiki/Data-flow_diagram

    [Estefan 2007] Estefan, J. 2008. A Survey of Model-Based Systems Engineering (MBSE) Methodologies, Rev. B. San Diego, CA, USA: International Council on Systems Engineering. INCOSE-TD-2007-003-02. Image taken from https://sebokwiki.org/wiki/Integrating_Supporting_Aspects_into_System_Models. (thank you SEBOK :-))

    [IREB Syllabus for Requirements Modelling] https://cpre.ireb.org/en/concept/requirements-modeling

    [Pohl 2010] K. Pohl: Requirements Engineering – Fundamentals, Principles, and Techniques. Springer, New York, 2010

    [SysML] Systems Modeling Language – a general-purpose modeling language for systems engineering applications – based on [UML]

    [TOGAF] The Open Group Architecture Framework – a framework for enterprise architecture that provides an approach for designing, planning, implementing, and governing an enterprise information technology architecture

    [UML] Unified Modeling Language – is a general-purpose visual modeling language for software engineering

  • Graph to Text with AI

    In my journey exploring Easy Conceptual Modelling, I want to understand how AI can help. In this post I’ll look into whether a general-purpose LLM can translate a simple graph into its textual representation. Here is my sample input graph.

    The graph as image.
    Sample hand drawn graph

    The answer is – spoiler alert – yes. In its initial response, one such LLM – Claude 3.5 Sonnet – produced a textual script with only one essential semantic error. It took three more prompt rounds to reach the perfect output below.

    digraph G {
    compound=true;
    edge[dir=back];
    
    // Define subgraph for The Environment
    subgraph cluster_environment {
        label="The Environment";
        style=solid;
        
        World;
        Goal;
        User;
        Machine;
        
        // Edges within The Environment
        World -> Goal [label="in"];
        Goal -> User [label="has"];
        Machine -> User [label="interacts with"];
        World -> User [label="is in"];
        World -> Machine [label="interacts with"];
    }
    
    // Define subgraph for The Team
    subgraph cluster_team {
        label="The Team";
        style=solid;
        
        Sensemaking;
        Engineer;
        
        // Edges within The Team
        Sensemaking -> Engineer [label="needs"];
    }
    
    // Cross-subgraph edges
    World -> Sensemaking [label="of", ltail=cluster_environment];
    Machine -> Engineer [label="builds"];
}
    Claude – PromptB – Result4


    OK, nothing groundbreaking – feel free to skip this post unless you’re interested in the details.

    Why am I looking into this?

    I believe that AI and LLMs will relieve engineers of repetitive and time-consuming tasks, allowing them to focus on conceptual work, like Sensemaking and Conceptual Modelling, which are the focus of my blog.

    Going one step further, I wanted to check how AI can save conceptual modelers from tedious tasks – here translating a graph into its textual representation. Without claiming this is the best way – on the contrary, there are likely more efficient methods – I found this would be a nice way to start evaluating AI.

    In the rest of the post

    I will:

    • introduce the Graphviz graph visualization software;
    • explain the benchmarking procedure;
    • show the result for a few standard LLMs;
    • and conclude with what we’ve learned.

    What is Graphviz?

    Graphviz is an open-source graph visualization software; see https://graphviz.org/. A graph, like the ones shown above, has a set of nodes, which may be grouped into subgraphs; nodes and subgraphs may be linked by edges. DOT is the plain-text graph description language that serves as input for Graphviz, which renders DOT text into visual diagrams. Below is an example of Graphviz DOT text.
    Apart from my hand drawing, all graph images in this post have been rendered by Graphviz.
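    For readers new to DOT, here is a minimal example reusing names from the graphs above, with a cluster and labelled edges:

```dot
// Minimal DOT: a subgraph whose name starts with "cluster"
// is drawn as a box around its nodes.
digraph Minimal {
    subgraph cluster_team {
        label = "The Team";
        Sensemaking -> Engineer [label="needs"];
    }
    Engineer -> Machine [label="builds"];  // edge leaving the cluster
}
```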

    How to see the Graphviz scripts?

    On my blog’s version of this post, I have included the Graphviz script as the HTML ‘title’ attribute of each image, which you can view as follows:

    • On Android Chrome and Firefox, a long press on the image displays a pop-up with a partial ‘title’ attribute. On Chrome, tapping the attribute again reveals the full text.
    • On both Firefox and Chrome on my Linux laptop, the ‘title’ attribute pops up when I hover the mouse over the picture.

    The LinkedIn version of this article may not offer this possibility.

    What is AI?

    AI stands for Artificial Intelligence. Large Language Models (LLMs) are AIs that can process and generate text. I’ll evaluate the following LLMs:

    • Claude 3.5 Sonnet (free version)
    • ChatGPT GPT-4o (free version – short name for GPT-4 Omni)
    • DeepSeek V3 (free version)
    • Mistral AI – Le Chat (free version)
    • Perplexity (paid subscription) with various options.

    I ran these queries between 6 and 10 February 2025.

    Benchmark procedure

    I’ll first phrase a prompt requesting the LLM to convert a graphical representation of a graph into the Graphviz DOT syntax.

    Then, for each LLM, starting from a blank session:

    1. I’ll prompt the LLM as above;
    2. I’ll render the output script with GraphvizOnline;
    3. I’ll upload the resulting image on this post (see above How to see the Graphviz scripts).
    4. I’ll briefly assess the result;
    5. I may attempt to fix the issues.

    I’ll repeat these steps until I conclude the article.

    Assessment Criteria

    I’ll assess the quality of the conversion by checking whether the rendered graph:

    1. is semantically correct – that is, the nodes, subgraphs, edges; their labels; and the direction, source & target of the edges match the source graph.
      Here I’ll distinguish between:
      • essential errors – the transcription of the graph into the meaningful list of elements listed above has errors;
      • accidental errors – the translation of the list above into Graphviz syntax has errors.
    2. is readable (by a human)
    3. is understandable (by a human)
    4. reasonably respects the representation conventions of the input image (for example arrows going upwards).

    The Target

    Here is the target graph in Graphviz DOT syntax.
    By “Target” I mean the version, written manually by me, which I would consider an excellent result if it were generated by an LLM.

    digraph WhySenseMaking {
      edge[dir=back];
      compound=true;
      subgraph cluster_environment {
        World -> Machine -> User[label="interacts with" ];
        World -> User[label="is in" ];
        World -> Goal[label="in" ];
        Goal -> User [label="has" ];
        label = "The Environment";
      }
      subgraph cluster_Stakeholders {
        Sensemaking -> Engineer [label="needs" ];
        label = "The Team";
      }
      Machine -> Engineer [label="builds"];
      World -> Sensemaking [label="of" ltail= "cluster_environment"];
    }

    Which GraphvizOnline renders as:

    Target graph – a Graphviz script I have written as reference.

    PromptA

    My initial prompt (PromptA) was:

    Attached the image of a graph. Please convert this image into Graphviz DOT syntax.

    Claude 3.5 Sonnet – PromptA

    Result:

    digraph G {
  "World" -> "Goal" [label="in"];
  "Goal" -> "User" [label="has"];
  "User" -> "World" [label="is in"];
  "User" -> "Machine" [label="interacts with"];
  "Machine" -> "World" [label="interacts with"];
  "Engineer" -> "Machine" [label="builds"];
  "Sensemaking" -> "Engineer" [label="needs"];
  "Sensemaking" -> "Machine" [label="of"];

  subgraph cluster_0 {
    label = "The Environment";
    "World";
    "Goal";
    "User";
    "Machine";
  }

  subgraph cluster_1 {
    label = "The Team";
    "Sensemaking";
    "Engineer";
  }
}
    Claude – PromptA – Result1

    Assessment:

    • Four semantic errors:
      • 3 essential errors: wrong edge directions
      • 1 accidental error: The edge “Sensemaking of” wrongly targets “Machine” instead of the subgraph “The Environment”.
    • readable, understandable, respect of the conventions: not perfect. Some wriggling edges.

    Even if I find this first result surprisingly good for a simple first prompt, I am not satisfied; I suspect my prompt needs more guidance, so I rework it.

    PromptB

    In this new prompt I am much more explicit with the LLM about each step it has to perform – requesting that it list, as intermediate steps, all graph elements, especially the edges, their labels and directions.

    Attached the image of a graph. Please convert this image into Graphviz DOT syntax with following steps:

    1. list all subgraphs and their names;
    2. for each subgraph, list each contained nodes and their names;
    3. for each node, identify each outgoing edges;
      3.1 For each such edge, list the edge label, the source node, the target entity, and tag whether the target entity is a node or a subgraph;
    4. generate the corresponding Graphviz DOT syntax script.

    Claude 3.5 Sonnet – PromptB

    Here is the result of PromptB (see above How to see the Graphviz scripts?):

    digraph G {
    compound=true;
    rankdir=LR;
    
    // Define subgraph for The Environment
    subgraph cluster_environment {
        label="The Environment";
        style=solid;
        
        World;
        Goal;
        User;
        Machine;
        
        // Edges within The Environment
        Goal -> World [label="in"];
        User -> Goal [label="has"];
        User -> Machine [label="interacts with"];
        Machine -> World [label="interacts with"];
    }
    
    // Define subgraph for The Team
    subgraph cluster_team {
        label="The Team";
        style=solid;
        
        Sensemaking;
        Engineer;
        
        // Edges within The Team
        Engineer -> Sensemaking [label="needs"];
    }
    
    // Cross-subgraph edges
    Sensemaking -> cluster_environment [label="of", lhead=cluster_environment];
    Engineer -> Machine [label="builds"];
}
    Claude – PromptB – Result1

    Assessment:

    • Two semantic errors:
      • 1 essential error – that is, one error in the graph analysis: the “is in” edge between User and World is missing.
        I find this is good and very encouraging.
      • 1 accidental error – that is, one transcription error into the Graphviz syntax;
        • The “of” edge from “Sensemaking” wrongly targets a node called “cluster_environment”. This is due to Graphviz’s peculiar syntax for pointing to subgraphs. In my tests, no LLM has managed to get this syntax right.
    • Readable, understandable, respect of the conventions: not yet OK. I want my vertical, bottom-to-top input view, with the arrows pointing up.

    Fixing the errors

    Here I’ll take the LLM by the hand and point out the errors one by one, explicitly and step by step. For this script size, it would be faster to do it by hand – the LLM is no longer helping.

    Here is the first correcting prompt:

    Please fix the two following errors:
    1rst fix: The syntax from the “of” edge from “Sensemaking” to the subgraph “The Environment” is wrong. The DOT syntax of Graphviz is somewhat peculiar here: you need to set a node as target, within the intended target subgraph. Here best node would be “World”. So the corresponding DOT script would be: /// Sensemaking -> World [label=”of”, lhead=cluster_environment]; ///

    2nd fix: Please add the missing edge /// User -> World [label="is in"]; ///

    Below is the result – the LLM has fixed the two semantic errors as prompted. No semantic errors remain:

    digraph G {
    compound=true;
    rankdir=LR;
    
    // Define subgraph for The Environment
    subgraph cluster_environment {
        label="The Environment";
        style=solid;
        
        World;
        Goal;
        User;
        Machine;
        
        // Edges within The Environment
        Goal -> World [label="in"];
        User -> Goal [label="has"];
        User -> Machine [label="interacts with"];
        User -> World [label="is in"];
        Machine -> World [label="interacts with"];
    }
    
    // Define subgraph for The Team
    subgraph cluster_team {
        label="The Team";
        style=solid;
        
        Sensemaking;
        Engineer;
        
        // Edges within The Team
        Engineer -> Sensemaking [label="needs"];
    }
    
    // Cross-subgraph edges
    Sensemaking -> World [label="of", lhead=cluster_environment];
    Engineer -> Machine [label="builds"];
}
    Claude – PromptB – Result2


    I now want to improve the rendering – my target is to reproduce the input image. Here is the new prompt:

    Cool, this is now semantically perfect. Please do following decoration changes:

    1. remove the /// rankdir=LR; /// statement since I want the diagram to show vertically;
    2. now I’d like to vertically flip the orientation of the graph, so that the User and Engineer nodes are at the bottom, and all the arrows are pointing upwards. To achieve this result, please do the following:
      • First, you revert the direction of pointing of all arrows. To do this, please insert the directive /// edge[dir=back]; /// at the top of the DOT script;
      • Then, please revert the source and target elements of all edges. As example, /// User -> Goal [label="has"]; /// shall become /// Goal -> User [label="has"]; ///

    Having the tool take care of reverting source & target on all edges saves significant time. The result is almost perfect:

    digraph G {
    compound=true;
    edge[dir=back];
    
    // Define subgraph for The Environment
    subgraph cluster_environment {
        label="The Environment";
        style=solid;
        
        World;
        Goal;
        User;
        Machine;
        
        // Edges within The Environment
        World -> Goal [label="in"];
        Goal -> User [label="has"];
        Machine -> User [label="interacts with"];
        World -> User [label="is in"];
        World -> Machine [label="interacts with"];
    }
    
    // Define subgraph for The Team
    subgraph cluster_team {
        label="The Team";
        style=solid;
        
        Sensemaking;
        Engineer;
        
        // Edges within The Team
        Sensemaking -> Engineer [label="needs"];
    }
    
    // Cross-subgraph edges
    World -> Sensemaking [label="of", lhead=cluster_environment];
    Machine -> Engineer [label="builds"];
}
    Claude – PromptB – Result3

    Lastly, the LLM became confused again by the cumbersome syntax for pointing at subgraphs combined with the reverted edges.

    New prompt:

    One last bit, please replace “lhead” by “ltail” in the script line
    /// World -> Sensemaking [label=”of”, lhead=cluster_environment]; ///

    digraph G {
    compound=true;
    edge[dir=back];
    
    // Define subgraph for The Environment
    subgraph cluster_environment {
        label="The Environment";
        style=solid;
        
        World;
        Goal;
        User;
        Machine;
        
        // Edges within The Environment
        World -> Goal [label="in"];
        Goal -> User [label="has"];
        Machine -> User [label="interacts with"];
        World -> User [label="is in"];
        World -> Machine [label="interacts with"];
    }
    
    // Define subgraph for The Team
    subgraph cluster_team {
        label="The Team";
        style=solid;
        
        Sensemaking;
        Engineer;
        
        // Edges within The Team
        Sensemaking -> Engineer [label="needs"];
    }
    
    // Cross-subgraph edges
    World -> Sensemaking [label="of", ltail=cluster_environment];
    Machine -> Engineer [label="builds"];
}
    Claude – PromptB – Result4

    Assessment:

    • zero semantic errors
    • readable, understandable: perfect.
    • respect of the conventions: excellent.
    • But: I needed three rounds of correcting prompts.

    Here is the source image for comparison:

    Sample hand drawn graph


    Let’s move on to the next LLM.

    Perplexity with GPT-4 Omni – PromptB

    digraph Graph {
    // Define subgraph for "The Environment"
    subgraph cluster_Environment {
        label = "The Environment";
        World [label="World"];
        Goal [label="Goal"];
        User [label="User"];
        Machine [label="Machine"];
    }

    // Define subgraph for "The Team"
    subgraph cluster_Team {
        label = "The Team";
        Sensemaking [label="Sensemaking"];
        Engineer [label="Engineer"];
    }

    // Edges within and between subgraphs
    World -> Goal [label="in"];
    World -> Machine [label="interacts with"];
    Goal -> User [label="has"];
    User -> Goal [label="is in"];
    Machine -> World [label="interacts with"];
    Machine -> Sensemaking [label="of"];
    Sensemaking -> Engineer [label="needs"];
    Engineer -> Machine [label="builds"];
}
    Perplexity-GPT-4 Omni – PromptB – Result1

    Assessment:

    • One syntax error – notably, the only one in the whole experiment. The digraph name “Graph” is invalid because graph is a reserved keyword in the DOT language (keywords are case-insensitive). Renaming it to “G”, I can render the graph.
    • Six semantic errors – all involving edges (as in most cases).
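    This kind of syntax slip is easy to catch before rendering: the DOT language reserves graph, digraph, node, edge, subgraph and strict as case-insensitive keywords. A minimal pre-check could look like this (my own sketch, not part of the experiment):

```python
# DOT keywords are case-insensitive, so "Graph" collides with "graph".
DOT_KEYWORDS = {"graph", "digraph", "node", "edge", "subgraph", "strict"}

def valid_graph_name(name: str) -> bool:
    """True if the name can safely be used as a DOT graph identifier."""
    return name.lower() not in DOT_KEYWORDS
```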

    With too many errors, I move on to the next LLM.

    Perplexity with DeepSeek R1 – PromptB

    digraph G {
    // Subgraph for The Environment
    subgraph cluster_Environment {
        label = "The Environment";
        World [label="World"];
        Goal [label="Goal"];
        User [label="User"];
        Machine [label="Machine"];

        // Edges within The Environment
        World -> Goal [label="in"];
        World -> Machine [label="interacts with"];
        Goal -> World [label="is in"];
        Goal -> User [label="has"];
        Machine -> World [label="interacts with"];
    }

    // Subgraph for The Team
    subgraph cluster_Team {
        label = "The Team";
        Sensemaking [label="Sensemaking"];
        Engineer [label="Engineer"];

        // Edges within The Team
        Sensemaking -> Engineer [label="needs"];
    }

    // Edges between subgraphs
    Sensemaking -> Machine [label="of"];
}
    Perplexity-DeepSeekR1- PromptB – Result1

    I count eight semantic errors, all involving edges.
    Too many errors – I move on to the next LLM.

    GPT-4o – PromptB

    From chatgpt.com, without a paid subscription. GPT-4o stands for “GPT-4 Omni”.

    digraph G {
    // Subgraph: The Environment
    subgraph cluster_environment {
        label = "The Environment";
        World;
        Goal;
        User;
        Machine;

        // Edges in The Environment
        World -> Goal [label="interacts with"];
        Goal -> World [label="is in"];
        Goal -> User [label="has"];
        User -> Goal [label="is in"];
        Machine -> World [label="interacts with"];
        Machine -> User [label="interacts with"];
    }

    // Subgraph: The Team
    subgraph cluster_team {
        label = "The Team";
        Sensemaking;
        Engineer;

        // Edges in The Team
        Sensemaking -> Engineer [label="needs"];
        Engineer -> Machine [label="builds"];
    }
}
    GPT-4o- PromptB – Result1

    Assessment:

    I count about six semantic errors. I wonder if an LLM could be used to automate error-counting – perhaps another time.
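    For edge errors at least, an LLM may not even be needed: extracting the (source, target, label) triples of the generated script and of a hand-written reference, then counting the differences, would automate the tally. A minimal sketch (my own hypothetical helpers; it ignores node and subgraph errors, and counts a reversed edge as two errors, one missing plus one spurious):

```python
import re

# Matches labelled edges: Source -> Target [label="..."]
EDGE_RE = re.compile(r'(\w+)\s*->\s*(\w+)\s*\[label="([^"]*)"\]')

def edges(dot: str) -> set:
    """All (source, target, label) triples found in a DOT script."""
    return set(EDGE_RE.findall(dot))

def edge_errors(candidate: str, reference: str) -> int:
    """Count edges missing from the candidate plus edges it invented."""
    cand, ref = edges(candidate), edges(reference)
    return len(ref - cand) + len(cand - ref)
```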

    Next day, same GPT-4o LLM, same PromptB:

    digraph G {
    subgraph cluster_Environment {
        label = "The Environment";
        World;
        Goal;
        User;
        Machine;
        
        World -> Goal [label="in"];
        Goal -> User [label="has"];
        User -> Machine [label="interacts with"];
        Machine -> World [label="interacts with"];
    }
    
    subgraph cluster_Team {
        label = "The Team";
        Sensemaking;
        Engineer;
        
        Sensemaking -> Engineer [label="needs"];
    }
    
    Engineer -> Machine [label="builds"];
    Machine -> Sensemaking [label="of"];
}
    GPT-4o- PromptB – Result2

    Although the number of errors is about the same, the result is slightly better: only one edge is missing, I believe. Many of the errors concern edge direction.

    DeepSeek V3 – PromptB

    From chat.deepseek.com, without a paid subscription.

    digraph G {
    subgraph cluster_Environment {
        label="The Environment";
        World -> Grad [label="in"];
        Grad -> Lisen [label="has"];
        Lisen -> Machine [label="Interacts with"];
        Machine -> builds [label="interacts with"];
    }

    subgraph cluster_Team {
        label="The Team";
        Sensemaking -> Engineer [label="heads"];
    }

    // Define nodes not in subgraphs
    builds [label="builds"];
}
    DeepSeekV3- PromptB – Result1

    Assessment:

    Too many errors – I stopped counting.
    For the first time, the labels are wrong, and a node even gets the name of an edge. This is far worse than the previous LLMs.

    Mistral AI Le Chat

    From https://chat.mistral.ai, without a paid subscription.

    digraph G {
    // Define subgraphs
    subgraph cluster_Environment {
        label = "The Environment";
        World;
        Goal;
        User;
        Machine;
    }

    subgraph cluster_Team {
        label = "The Team";
        Sensemaking;
        Engineer;
    }

    // Define edges within The Environment subgraph
    World -> Machine [label="interacts with"];
    World -> Goal [label="in"];
    Goal -> World [label="is in"];
    Goal -> User [label="has"];
    User -> Machine [label="interacts with"];
    Machine -> cluster_Team [label="builds"];

    // Define edges within The Team subgraph
    Sensemaking -> Engineer [label="needs"];
}
    MistralAI_LeChat – PromptB – Result1

    I counted nine semantic errors, which is poor.

    Conclusion

    The question was whether LLMs available today support the conversion of a simple graph into its textual representation.

    In short, yes. Reasonably.

    Here are my learnings:

    • Some LLMs available today can perform the task, with the best model producing only one essential semantic error on the initial prompt (Claude – PromptB – Result1 above).
    • Claude 3.5 Sonnet ranks best, GPT-4o second. Some LLMs perform poorly.
    • The prompt matters. A step-by-step approach improved the results.
    • Most errors occur during semantic extraction of edges – particularly in determining their direction.
    • Nodes, subgraphs, and labels were generally recognized accurately, except by DeepSeek V3.
    • The translation of these elements into Graphviz syntax also works almost perfectly.
    • In other words, LLMs are freeing me from tedious transcription and syntax issues.
    • This is a tiny step in a huge domain; there is much more to explore, e.g. larger graphs and different diagram types.
    • More efficient and reliable tools likely exist. One LLM suggested a multi-step process involving image processing, text recognition, and graph reconstruction, using Python libraries for OCR, node, and edge detection.
    • I’m excited to explore future applications, such as using LLMs to generate conceptual models from existing knowledge in free text.

    I look forward to your comments on the LinkedIn post. You can also find more information on my blog: https://recognizing.ch

    #AI, #LLM, #Modelling, #Graphviz, #Engineering

  • Why Conceptual Modelling?

    Have you ever sat through a meeting where engineers talk past each other? Where requirements feel like checkboxes to tick rather than insights to share?

    Following up on our previous post, where we explored the barriers to adoption, I will show how Conceptual Modelling contributes to engineering, what role requirements engineers play, and why it matters. Let us first introduce the Engineering context.

    Engineering: Building Machines for User Goals in the World

    Diagram showing how Engineering is driven by user goals in the world. 
Graphviz script:
digraph G {
    edge[dir=back];
    style=filled;
    color=lightgrey;
    node [style=filled,color=white];
    compound=true;
    subgraph cluster_world {
        World -> Machine -> User[label="interacts with  " ];
        World -> User[label="is in" ];
        World -> Goal[label="in" ];
        Goal -> User [label="has" ];
        label = "The world";
    }
    subgraph cluster_disc {
        Engineering;
        label = "The disciplines";
    }
    Machine -> Engineering[label="builds"];
    Goal -> Engineering [label="is driven by" ];
}

    In The World and the Machine, Michael Jackson writes, […] we are engineers because we make useful machines. We are concerned both with the world, in which the machine serves a useful purpose, and with the machine itself. [Jackson, 1995].

    Having spent the past 12 years at Sonova, I’ll be illustrating this post with hearing devices. The User is the wearer of the hearing devices, with her or his family and audiologist. The Machine is the pair of hearing devices and their companion apps. The World includes the user’s conversation partners, mobile phone, the audiologist’s equipment, the rain… The main Goal is to enjoy social interactions. Lastly, Engineering, the discipline of building Machines for Users in the World, gathers stakeholders beyond R&D, such as Product Marketing, Usability Engineering, Manufacturing, and Procurement. I’ll refer to them all either as stakeholders or Engineers.

    To succeed, engineers must understand not just the hearing devices but also the user’s World. This brings us to Sensemaking.

    Sensemaking: Making Sense of the World

    To build the right Machine, Engineers need to make sense of the World and the Machine as they interact. I refer to this process as Sensemaking.

    Diagram showing how Sensemaking is concerned with the World.
Here is the Graphviz script:
digraph WhySenseMaking {
    edge[dir=back];
    style=filled;
    color=lightgrey;
    node [style=filled,color=white];
    compound=true;
    subgraph cluster_world {
        World -> Machine -> User[label="interacts with" ];
        World -> User[label="is in" ];
        World -> Goal[label="in" ];
        Goal -> User [label="has" ];
        label = "The world";
    }
    subgraph cluster_Stakeholders {
        Sensemaking -> Engineer [label="needs" ];
        label = "The team";
    }
    Engineer -> Machine [dir=forward label="builds"];
    World -> Sensemaking [label="of" ltail="cluster_world"];
}

    Consider our hearing devices. Engineers must make sense of many aspects, including:

    • How diverse is the morphology of human ears?
    • Do hearing device users wear bicycle helmets and sunscreen on rainy days?
    • Is hearing involved in speaking? How do hearing devices affect own speech production?

    Sensemaking is the collective cognitive process of apprehending the World – discovering, naming, questioning, structuring and sharing; agreeing and disagreeing; by the team and for the team.

    Requirements Engineering: Facilitating Sensemaking

    Requirements Engineering (RE) is often seen as a bureaucratic burden imposed by regulatory obligations. This perception is unfortunate, as it prevents RE practitioners from delivering value – for example when they are not invited to early design meetings – and because it spoils the fun, driving Requirements Engineers to chase trendier titles like Project Manager.

    Document stack – image from Freepik.com

    I have experienced the essential value RE can contribute, which I want to share.

    Requirements Engineering is the engineering discipline concerned with the real-world goals for, functions of, and constraints on machines [Zave, 1995].

    Klaus Pohl breaks this down into three dimensions: understanding, agreement, and documentation(*):

    Klaus Pohl’s three dimensions of Requirements Engineering: understanding, agreement, and documentation.
    Requirements engineering is a cooperative, iterative, and incremental process which aims at ensuring that:
    (1) All relevant requirements are explicitly known and understood at the required level of detail.
    (2) A sufficient agreement about the system requirements is achieved between the stakeholders involved.
    (3) All requirements are documented and specified in compliance with the relevant documentation/specification formats and rules. [Pohl, 2010].

    I argue that the essential purpose of requirements engineering is Sensemaking—facilitating the cognitive journey of the project stakeholders towards understanding and agreement, about the World, and about the Machine as it interacts with its context.

    Requirements engineers are best suited to support the team in its Sensemaking journey – especially with their expert toolbox: Conceptual Modelling.

    Conceptual modelling: The Toolbox for Sensemaking

    Let’s first define Modelling.

    Modeling in its broadest sense is the cost-effective use of something in place of something else for some purpose. It allows us to use something that is simpler, safer, or cheaper than reality instead of reality for some purpose [Rothenberg, 1989]

    In our engineering context, Conceptual Modelling is the process of developing an abstracted representation of the Machine, the World, their interfaces, including the Users and their Goals, often depicted using graphical notations.

    With this set of diagrams, Conceptual Modelling helps humans to think together. Here is how:

    1. Conceptual Modelling reduces cognitive load

    Studies show that, for complex and interconnected concepts, humans process information presented as visual diagrammatic representations faster than as textual sentential representations [Larkin & Simon, 1987].
    Consider the two forms of the same semantic network below. How much effort does it take to make sense of each?

    digraph TheWorldAndTheMachine {
    subgraph cluster_world {
    World -> Machine -> User[label="interacts with" ];
    World -> User[label="is in" ];
    World -> Goal[label="in" ];
    Goal -> User [label="has" ];
    label = "The world";
    }
    subgraph cluster_disciplines {
    Engineering;
    label = "The disciplines";
    }
    Machine -> Engineering[label="builds"];
    Goal -> Engineering [label="is driven by" ];
    }
    Diagram showing how Engineering is driven by user goals in the world. 
Graphviz script:
digraph G {
    edge[dir=back];
    style=filled;
    color=lightgrey;
    node [style=filled,color=white];
    compound=true;
    subgraph cluster_world {
        World -> Machine -> User[label="interacts with  " ];
        World -> User[label="is in" ];
        World -> Goal[label="in" ];
        Goal -> User [label="has" ];
        label = "The world";
    }
    subgraph cluster_disc {
        Engineering;
        label = "The disciplines";
    }
    Machine -> Engineering[label="builds"];
    Goal -> Engineering [label="is driven by" ];
}

    2. Conceptual Modelling enforces focus

    Each model (or modelling language) focuses on a particular aspect of reality, excluding all others. By choosing a model, participants force themselves to focus on this particular aspect. For example, modern musical notation excels at representing Western melodies but is very bad for most other uses.

    Compared to natural language, this reduction of freedom is a major simplification: it reduces the dimensions of the problem space, thereby preventing ambiguity and misunderstanding. This focus also enables considerably more precision. Can you imagine Mozart writing his music in German?

    3. Conceptual Modelling fosters multiplicity of viewpoints

    Using multiple diagrams in a Conceptual Modelling framework encourages stakeholders to consider multiple perspectives, fostering deeper and more disciplined thinking.

    When applied effectively, these three qualities—reduced cognitive load, focus, and multiplicity of viewpoints—accelerate, simplify, and enrich collective thinking, understanding, and problem-solving.

    Conceptual modelling is, therefore, the perfect toolbox for sensemaking.

    Conceptual Modelling in Real Life

    Following up on my hearing device engineering example: at Sonova, using a model called a causality diagram, we have recently explored top user pains related to hearing. In small groups, with experts from disciplines ranging from audiology to electroacoustics, we systematically investigated and modelled the causal relationships between user goals, the involved system performances, and the device technologies and functions.

    digraph CausalityDiagram {
    rankdir="LR";
    edge[dir=back];
    style=filled;
    color=lightgrey;
    node [style=filled,color=white];
    compound=true;
    subgraph cluster_Needs {
        OwnVoiceQ;
        label = "User Needs";
    }
    subgraph cluster_SysPerf {
        Occlusion;
        PerfP;
        label = "Syst Perfs";
    }
    subgraph cluster_Techno {
        TransdTechT;
        FeatureF;
        label = "Tech. and features";
    }
    OwnVoiceQ -> Occlusion[label="alters"];
    OwnVoiceQ -> PerfP[label="improves"];
    Occlusion -> TransdTechT [label="increase" ];
    PerfP -> FeatureF [label="increase" ];
    Occlusion -> FeatureF [label="decrease" ];
}
    Example causality diagram – “Am I satisfied with my own voice?”

    The resulting causality diagrams, the working sessions to get there, and the sharing session with the larger team have been focused, energizing, enjoyable and mind-stretching, revealing new trade-offs and dependencies to the larger community.

    Our Framework

    I will go through the details of our Easy Framework for Conceptual Modelling in further posts. For now, let’s conclude our quest for the role and rationale for Conceptual Modelling.

    The Big Picture

    The big picture, adding RE and Conceptual modelling to previous views. Graphviz:

digraph G {
    edge[dir=back];
    style=filled;
    color=lightgrey;
    node [style=filled,color=white];
    compound=true;
    subgraph cluster_world {
        World -> Machine -> User[label="interacts with" ];
        World -> User[label="is in" ];
        World -> Goal[label="in" ];
        Goal -> User [label="has" ];
        label = "The world";
    }
    subgraph cluster_Stakeholders {
        Sensemaking -> Engineer [label="needs" ];
        label = "The team";
    }
    subgraph cluster_disc {
        Engineering -> RequirementsEngineering[label="is part of" ];
        RequirementsEngineering  -> ConceptualModelling[label="toolbox for" ];
        label = "The disciplines";
    }
    Machine -> Engineer[label="builds"];
    Sensemaking -> RequirementsEngineering[label="purpose is" ];
    Goal -> Engineering [label="is driven by" ];
    User -> Sensemaking  [label="of" ltail="cluster_world"];
    Engineer -> ConceptualModelling [dir=forward label="thinks with" ];
    RequirementsEngineering  -> Engineer[dir=forward label="serves" ];
}

    Conceptual Modelling is not about drawing nice diagrams: it’s about facilitating collective cognition as Engineers make sense of the World, thereby building Machines which truly serve their Users.

    Stay tuned as we dig into our Easy Framework for Conceptual Modelling! And I’d love to hear your views and experience as a reply to my article, duplicated on LinkedIn.

    Sources

    [Jackson, 1995] M. Jackson, “The world and the machine,” doi: 10.1145/225014.225041.

    [Zave, 1995] P. Zave, “Classification of research efforts in requirements engineering,” doi: 10.1109/ISRE.1995.512563.

    [Pohl, 2010] K. Pohl, “Requirements engineering: fundamentals, principles, and techniques.” Heidelberg ; New York: Springer, 2010.

    [Rothenberg, 1989] J. Rothenberg, “The Nature of Modeling.”

    [Larkin & Simon, 1987] J. H. Larkin and H. A. Simon, “Why a Diagram is (Sometimes) Worth Ten Thousand Words,” Cognitive Science, vol. 11, 1987.

    (*) I’ve renamed “content” to “understanding” because I claim this is the real purpose. Klaus, I’d love to chat about that with you one day.

  • An Easy Framework for Conceptual Modelling

    Throughout my career, I’ve faced a recurring challenge: convincing fellow engineers to embrace conceptual modelling in early-phase engineering. Whether as a foundation for requirements documents, or as a vehicle for domain sensemaking with the team, the value of conceptual modelling is widely recognized—but it rarely gets done.

    Why is that?

    The problem lies in the complexity of the practice:

    PlantUML Syntax: 
@startmindmap
+ Rarely gets done, why?
++ Vastness of Modelling
++ Modelling the Wrong Thing
++ One Size Does NOT Fit All
++ Confusing the Phases
@endmindmap

    1. Vastness of Modelling:
      Models are everywhere. We use them daily to make sense of the world. But this abundance can be a handicap. Without structure and constraints, modelling efforts lack channelling and direction towards repeatable outcomes.
    2. Modelling the Wrong Thing:
      Modelling is, obviously, abstract. Without strong constraints, I’ve often seen the modeller focusing on the feature instead of the machine, or on stakeholders instead of actors. Or mistaking a state for a function. Once you’re in such a trap, whatever your efforts, you end up with confusion and distrust of the practice.
    3. One Size Does NOT Fit All:
      The tools and techniques suitable for banking applications may not work for medical devices. We need to recognize and name the scope for which our sensemaking effort and conceptual modelling toolbox are applicable. While I recognize the huge value of the IREB initiative gathering it all in one book, I envision something narrower, focused on the market-driven, software-intensive device engineering domain I know, ready to use out of the box.
    4. Confusing the Phases:
      While models serve from early prototyping to detailed design, their purposes may differ. I’ve seen modelling techniques misapplied at the wrong engineering phase, again leading to confusion and distrust in the discipline. I’ll strictly consider the machine as a “black box in the world”.

    The Easy Framework for Conceptual Modelling

    Based on my practice in market-driven, software-intensive device engineering, I want to explore these issues and share my Easy Framework for Conceptual Modelling—the pragmatic toolkit I’ve been growing over the years.

    The Easy Framework for Conceptual Modelling – www.freepik.com

    This framework classically includes:

    • A structured set of models, offering multi-faceted viewpoints on both the world and the machine
    • For the purpose of sensemaking with the team – which I claim is the essential purpose of requirements engineering
    • Each model following an established modelling language
    • Their rationale, practical guidance and examples

    I’m also intrigued by the potential of large language models to assist in transforming existing knowledge – e.g. your company’s internal wiki – into such models. How far can AI tools enhance the process? Publications on the topic are exploding; I’ll dig into it.

    I’ll share my learning and insights here on recognizing.ch, and sample milestones on LinkedIn.

    Now, what do you think? Do you recognize the problem? Do you see value in this exploration? Gaps we should focus on? Or do you feel like joining the effort? I’d love to hear your thoughts!

  • Where am I writing from

    Here is where I am writing from – what I’ve done and what I know.

    PlantUML Syntax: @startmindmap
<style> 
node {MaximumWidth 280} 
</style>
+ Discipline
++_ Requirements Engineering
++_ Systems Engineering
+++ in the “INCOSE” sense, not MS365 or Azure Cloud…
@endmindmap

    PlantUML Syntax: @startmindmap
<style> 
node {MaximumWidth 280} 
</style>
+ Product\ntypology
++_ Market driven software intensive ecosystems
++_ One\n by\n one
+++_ Market\nDriven
++++ Where future clients are not aware of the product at the time it is engineered.
+++_ Software\nintensive
++++ Where software accounts for a major part of the R&D costs
+++_ Eco\nsystems
++++ Where the engineered **Machine** interacts with other machines.
++++_ Machine
+++++ An engineered object with a purpose in the world. Does not need to have hardware, but the actual interface with humans is tangible. Machines are often part of **Product families** 
++++_ Product\nfamilies
+++++_ where\n variability\n is involved
++++++ variability in time, with successive **versions** released over time
++++++ variability in space, with (possibly simultaneous) **variants** of the machine
@endmindmap

    PlantUML Syntax: @startmindmap
<style> 
node {MaximumWidth 280} 
</style>
+ Business domain
++_ Medical Devices – hearing devices & their ecosystem
+++ Sonova AG
++_ Mobile phone industry
+++ Motorola
+++ Mitsubishi Electrics
++_ Telecommunications
+++ Alcatel
++_ Various customers e.g.
+++ Renault’s Techno centre
+++ SNCF
+++ Ecole Centrale Paris
+++ Université Paris 1 Panthéon-Sorbonne
+++ Polarion
@endmindmap

  • Why REcognizing.ch

    In this blog I explore sensemaking and conceptual modelling in Requirements Engineering – in the context of market-driven, software-intensive product engineering.

    What?

    • sensemaking is the process of representing knowledge about the candidate problem and solution so that engineers can think, understand each other, agree, verify… so that they can engineer.
    • conceptual modelling is the toolbox which enables sensemaking

    Why?

    • Because I believe the mission of cognitively enabling the engineering teams is the core purpose of requirements engineering (RE). “Cognitively enabling” here means “delivering the concepts with which people think together”
    • Because sensemaking is where Requirements Engineers maximize the value they deliver. This is the part of the job which “makes sense” (in various dimensions here). This is the part that is fun.
    • For employers and RE team managers, sensemaking means attractiveness, motivation and retention. This is investing in the future.
    • Because, although good conceptual models ultimately look simple and obvious, I have observed they are often poorly realized and therefore disregarded. Conceptual modelling is “an art [..] which requires apprenticeship” [Bernhard Thalheim].
    • Because, as automation (Knowledge Graphs, ML, AI, ..) is freeing (human) software engineers from repetitive tasks towards higher levels of abstraction, from writing code towards model-driven engineering, conceptual modelling will play a key role interfacing humans and machines while they make sense of “what we do want to build, and why”. This is me being excited.
    • Because conceptual models, with both their formality and their quality of being grasped in a short glimpse, may become handy when exchanging knowledge with machines – not only in the context of engineering.

    In short, a bunch of topics I feel delighted to dive into.

    Another why?

    • Because, after twelve years leading Requirements Engineering at Sonova (see my LinkedIn page), I am looking for my next occupation, and exploring what matters to me feels both fulfilling, “sense-making” :-), and likely to increase my chances of reaching the organizations which will need my services.

    I hope you’ll enjoy it – feel free to contact me and react.