Blog

  • Graph to Text with AI

    In my journey exploring Easy Conceptual Modelling, I want to understand how AI can help. In this post I’ll look into whether a general-purpose LLM can translate a simple graph into its textual representation. Here is my sample input graph.

    The graph as an image:
    Sample hand-drawn graph

    The answer is – spoiler alert – yes. In its initial response, one such LLM – Claude 3.5 Sonnet – produced a textual script with only one essential semantic error. It took three more prompt rounds to reach the perfect output below.

    digraph G {
    compound=true;
    edge[dir=back];
    
    // Define subgraph for The Environment
    subgraph cluster_environment {
        label="The Environment";
        style=solid;
        
        World;
        Goal;
        User;
        Machine;
        
        // Edges within The Environment
        World -> Goal [label="in"];
        Goal -> User [label="has"];
        Machine -> User [label="interacts with"];
        World -> User [label="is in"];
        World -> Machine [label="interacts with"];
    }
    
    // Define subgraph for The Team
    subgraph cluster_team {
        label="The Team";
        style=solid;
        
        Sensemaking;
        Engineer;
        
        // Edges within The Team
        Sensemaking -> Engineer [label="needs"];
    }
    
    // Cross-subgraph edges
    World -> Sensemaking [label="of", ltail=cluster_environment];
    Machine -> Engineer [label="builds"];
}
    Claude – PromptB – Result4


    OK, nothing groundbreaking – feel free to skip this post unless you’re interested in the details.

    Why am I looking into this?

    I believe that AI and LLMs will relieve engineers of repetitive and time-consuming tasks, allowing them to focus on conceptual work like Sensemaking and Conceptual Modelling, which are the focus of this blog.

    Going one step further, I wanted to check how AI can save conceptual modelers from tedious tasks – here translating a graph into its textual representation. Without claiming this is the best way – on the contrary, there are likely more efficient methods – I found this would be a nice way to start evaluating AI.

    In the rest of the post

    I will:

    • introduce the Graphviz graph visualization software;
    • explain the benchmarking procedure;
    • show the results for a few standard LLMs;
    • and conclude with what we’ve learned.

    What is Graphviz?

    Graphviz is open-source graph visualization software; see https://graphviz.org/. A graph, like the ones shown above, consists of nodes, which may be grouped into subgraphs; nodes and subgraphs may be linked by edges. DOT is the plain-text graph description language that serves as input for Graphviz, which renders DOT text into visual diagrams.
    Apart from my hand drawing, all graph images in this post have been rendered by Graphviz.
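    As an illustration, here is a minimal DOT script (a made-up example, not one of the graphs of this post): a subgraph whose name starts with “cluster” is drawn as a box, and a labelled edge links two nodes.

```dot
digraph Example {
    // subgraphs whose names start with "cluster" are drawn as boxes
    subgraph cluster_box {
        label = "A box";
        Alice;
    }
    Bob;
    Alice -> Bob [label="knows"];
}
```

    Saved as example.dot, this renders with the Graphviz command line as, for instance, dot -Tpng example.dot -o example.png.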

    How to see the Graphviz scripts?

    On my blog’s version of this post, I have included the Graphviz script as the HTML ‘title’ attribute of each image, which you can view as follows:

    • On Android Chrome and Firefox, a long press on the image displays a pop-up with a partial ‘title’ attribute. On Chrome, tapping the attribute again reveals the full text.
    • On both Firefox and Chrome on my Linux laptop, the ‘title’ attribute pops up when I hover the mouse over the picture.

    The LinkedIn version of this article may not offer this possibility.

    What is AI?

    AI stands for Artificial Intelligence. Large Language Models (LLMs) are AIs that can process and generate text. I’ll evaluate the following LLMs:

    • Claude 3.5 Sonnet (free version)
    • ChatGPT GPT-4o (free version – short name for GPT-4 Omni)
    • DeepSeek V3 (free version)
    • Mistral AI – Le Chat (free version)
    • Perplexity (paid subscription) with various options.

    I ran these queries between 6 and 10 February 2025.

    Benchmark procedure

    I’ll first phrase a prompt requesting the LLM to convert a graphical representation of a graph into the Graphviz DOT syntax.

    Then, for each LLM, starting from a blank session:

    1. I’ll prompt the LLM as above;
    2. I’ll render the output script with GraphvizOnline;
    3. I’ll upload the resulting image to this post (see How to see the Graphviz scripts? above);
    4. I’ll briefly assess the result;
    5. I may attempt to fix the issues.

    I repeat this until I conclude the article.

    Assessment Criteria

    I’ll assess the quality of the conversion by checking whether the rendered graph:

    1. is semantically correct – that is, the nodes, subgraphs, edges; their labels; and the direction, source & target of the edges match the source graph.
      Here I’ll distinguish between:
      • essential errors – the transcription of the graph into the meaningful list of elements listed above has errors;
      • accidental errors – the translation of the list above into Graphviz syntax has errors.
    2. is readable (by a human)
    3. is understandable (by a human)
    4. reasonably respects the representation conventions of the input image (for example arrows going upwards).
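    Criterion 1 amounts to comparing lists of elements. As a side note, the edge part of this extraction can be sketched mechanically in a few lines of Python – an illustration of the criterion, not a tool I used. The regex only handles the simple one-edge-per-line statements appearing in this post (no chained edges like A -> B -> C):

```python
import re

# Extract (source, target, label) triples from a DOT script, so that two
# scripts can be compared element by element. A rough sketch: it only
# matches simple statements of the form  A -> B [label="..."];
EDGE_RE = re.compile(r'(\w+)\s*->\s*(\w+)\s*\[[^\]]*label="([^"]*)"')

def edge_triples(dot_text):
    """Return the set of labelled edges found in dot_text."""
    return set(EDGE_RE.findall(dot_text))
```

    Comparing edge_triples(candidate) with edge_triples(target) then reveals missing, extra, or reversed edges.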

    The Target

    Here is the target graph in Graphviz DOT syntax.
    By “Target” I mean the version I wrote manually, which I would consider an excellent result had it been generated by an LLM.

    digraph WhySenseMaking {
      edge[dir=back];
      compound=true;
      subgraph cluster_environment {
        World -> Machine -> User[label="interacts with" ];
        World -> User[label="is in" ];
        World -> Goal[label="in" ];
        Goal -> User [label="has" ];
        label = "The Environment";
      }
      subgraph cluster_Stakeholders {
        Sensemaking -> Engineer [label="needs" ];
        label = "The Team";
      }
      Machine -> Engineer [label="builds"];
      World -> Sensemaking [label="of" ltail= "cluster_environment"];
    }

    Which GraphvizOnline renders as:
    Target graph – a Graphviz script I have written as reference.

    PromptA

    My initial prompt (PromptA) was:

    Attached the image of a graph. Please convert this image into Graphviz DOT syntax.

    Claude 3.5 Sonnet – PromptA

    Result:

    digraph G {
  "World" -> "Goal" [label="in"];
  "Goal" -> "User" [label="has"];
  "User" -> "World" [label="is in"];
  "User" -> "Machine" [label="interacts with"];
  "Machine" -> "World" [label="interacts with"];
  "Engineer" -> "Machine" [label="builds"];
  "Sensemaking" -> "Engineer" [label="needs"];
  "Sensemaking" -> "Machine" [label="of"];

  subgraph cluster_0 {
    label = "The Environment";
    "World";
    "Goal";
    "User";
    "Machine";
  }

  subgraph cluster_1 {
    label = "The Team";
    "Sensemaking";
    "Engineer";
  }
}
    Claude – PromptA – Result1

    Assessment:

    • Four semantic errors:
      • 3 essential errors: wrong edge directions
      • 1 accidental error: The edge “Sensemaking of” wrongly targets “Machine” instead of the subgraph “The Environment”.
    • readable, understandable, respect of the conventions: not perfect. Some wriggling edges.

    Even though, for a first simple prompt, I find this result surprisingly good, I am not satisfied. I suspect my prompt needs more guidance, so I rework it.

    PromptB

    In this new prompt I am much more explicit about each step the LLM has to perform – requesting that it list, as intermediate steps, all graph elements, especially the edges with their labels and directions.

    Attached the image of a graph. Please convert this image into Graphviz DOT syntax with following steps:

    1. list all subgraphs and their names;
    2. for each subgraph, list each contained nodes and their names;
    3. for each node, identify each outgoing edges;
      3.1 For each such edge, list the edge label, the source node, the target entity, and tag whether the target entity is a node or a subgraph;
    4. generate the corresponding Graphviz DOT syntax script.

    Claude 3.5 Sonnet – PromptB

    Here is the result of PromptB (see How to see the Graphviz scripts? above):

    digraph G {
    compound=true;
    rankdir=LR;
    
    // Define subgraph for The Environment
    subgraph cluster_environment {
        label="The Environment";
        style=solid;
        
        World;
        Goal;
        User;
        Machine;
        
        // Edges within The Environment
        Goal -> World [label="in"];
        User -> Goal [label="has"];
        User -> Machine [label="interacts with"];
        Machine -> World [label="interacts with"];
    }
    
    // Define subgraph for The Team
    subgraph cluster_team {
        label="The Team";
        style=solid;
        
        Sensemaking;
        Engineer;
        
        // Edges within The Team
        Engineer -> Sensemaking [label="needs"];
    }
    
    // Cross-subgraph edges
    Sensemaking -> cluster_environment [label="of", lhead=cluster_environment];
    Engineer -> Machine [label="builds"];
}
    Claude – PromptB – Result1

    Assessment:

    • Two semantic errors:
      • 1 essential error – that is, one error in the graph analysis: the “is in” edge between User and World is missing.
        I find this is good and very encouraging.
      • 1 accidental error – that is, one transcription error into the Graphviz syntax;
        • The “of” edge from “Sensemaking” wrongly targets a node called “cluster_environment”. This is due to Graphviz’s peculiar syntax for pointing at subgraphs. In my tests, no LLM managed to get this syntax right.
    • Readable, understandable, respect of the conventions: not yet OK. I want the vertical, bottom-to-top layout of the input image, with the arrows pointing upwards.

    Fixing the errors

    Here I’ll take the LLM by the hand and point out the errors one by one, explicitly and step by step. For a script of this size, it would be faster to do this by hand – the LLM is no longer helping.

    Here is the first correcting prompt:

    Please fix the two following errors:
    1rst fix: The syntax from the “of” edge from “Sensemaking” to the subgraph “The Environment” is wrong. The DOT syntax of Graphviz is somewhat peculiar here: you need to set a node as target, within the intended target subgraph. Here best node would be “World”. So the corresponding DOT script would be: /// Sensemaking -> World [label=”of”, lhead=cluster_environment]; ///

    2nd fix: Please add the missing edge /// User -> World [label=”is in”]; ///
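    For reference, this is the Graphviz mechanism behind the first fix: with compound=true, an edge must still connect two nodes, and the lhead/ltail attributes then clip it at the border of the named cluster. A minimal illustration with made-up names:

```dot
digraph Compound {
    compound=true;
    subgraph cluster_a { label="A"; n1; }
    subgraph cluster_b { label="B"; n2; }
    // declared between nodes n1 and n2, but clipped so that it visually
    // runs from the border of cluster_a to the border of cluster_b
    n1 -> n2 [ltail=cluster_a, lhead=cluster_b];
}
```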

    Below is the result – the LLM has fixed the two semantic errors as prompted. No semantic errors remain:

    digraph G {
    compound=true;
    rankdir=LR;
    
    // Define subgraph for The Environment
    subgraph cluster_environment {
        label="The Environment";
        style=solid;
        
        World;
        Goal;
        User;
        Machine;
        
        // Edges within The Environment
        Goal -> World [label="in"];
        User -> Goal [label="has"];
        User -> Machine [label="interacts with"];
        User -> World [label="is in"];
        Machine -> World [label="interacts with"];
    }
    
    // Define subgraph for The Team
    subgraph cluster_team {
        label="The Team";
        style=solid;
        
        Sensemaking;
        Engineer;
        
        // Edges within The Team
        Engineer -> Sensemaking [label="needs"];
    }
    
    // Cross-subgraph edges
    Sensemaking -> World [label="of", lhead=cluster_environment];
    Engineer -> Machine [label="builds"];
}
    Claude – PromptB – Result2


    I now want to improve the rendering – my target is to reproduce the input image. Here is the new prompt:

    Cool, this is now semantically perfect. Please do following decoration changes:

    1. remove the /// rankdir=LR; /// statement since I want the diagram to show vertically;
    2. now I’d like to vertically flip the orientation of the graph, so that the User and Engineer nodes are at the bottom, and all the arrows are pointing upwards. To achieve this result, please do the following:
      • First, you revert the direction of pointing of all arrows. To do this, please insert the directive /// edge[dir=back]; /// at the top of the DOT script;
      • Then, please revert the source and target elements of all edges. As example, /// User -> Goal [label=”has”]; /// shall become /// Goal -> User [label=”has”]; ///
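    The reversal requested in the prompt is purely mechanical; for comparison, a few lines of Python could do the same swap (an illustrative sketch, again assuming the simple one-edge-per-line statements used here):

```python
import re

# Swap source and target of every edge statement of the form
#   A -> B [ ... ];
# Attribute blocks are left untouched; chained edges are not handled.
EDGE_RE = re.compile(r'(\w+)(\s*->\s*)(\w+)(\s*\[)')

def reverse_edges(dot_text):
    return EDGE_RE.sub(r'\3\2\1\4', dot_text)
```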

    Having the tool take care of reverting source & target on all edges saves significant time. The result is almost perfect:

    digraph G {
    compound=true;
    edge[dir=back];
    
    // Define subgraph for The Environment
    subgraph cluster_environment {
        label="The Environment";
        style=solid;
        
        World;
        Goal;
        User;
        Machine;
        
        // Edges within The Environment
        World -> Goal [label="in"];
        Goal -> User [label="has"];
        Machine -> User [label="interacts with"];
        World -> User [label="is in"];
        World -> Machine [label="interacts with"];
    }
    
    // Define subgraph for The Team
    subgraph cluster_team {
        label="The Team";
        style=solid;
        
        Sensemaking;
        Engineer;
        
        // Edges within The Team
        Sensemaking -> Engineer [label="needs"];
    }
    
    // Cross-subgraph edges
    World -> Sensemaking [label="of", lhead=cluster_environment];
    Machine -> Engineer [label="builds"];
}
    Claude – PromptB – Result3

    Lastly, the LLM got confused again by the cumbersome syntax for pointing at subgraphs, now combined with the reverted edges.

    New prompt:

    One last bit, please replace “lhead” by “ltail” in the script line
    /// World -> Sensemaking [label=”of”, lhead=cluster_environment]; ///

    digraph G {
    compound=true;
    edge[dir=back];
    
    // Define subgraph for The Environment
    subgraph cluster_environment {
        label="The Environment";
        style=solid;
        
        World;
        Goal;
        User;
        Machine;
        
        // Edges within The Environment
        World -> Goal [label="in"];
        Goal -> User [label="has"];
        Machine -> User [label="interacts with"];
        World -> User [label="is in"];
        World -> Machine [label="interacts with"];
    }
    
    // Define subgraph for The Team
    subgraph cluster_team {
        label="The Team";
        style=solid;
        
        Sensemaking;
        Engineer;
        
        // Edges within The Team
        Sensemaking -> Engineer [label="needs"];
    }
    
    // Cross-subgraph edges
    World -> Sensemaking [label="of", ltail=cluster_environment];
    Machine -> Engineer [label="builds"];
}
    Claude – PromptB – Result4

    Assessment:

    • zero semantic errors
    • readable, understandable: perfect.
    • respect of the conventions: excellent.
    • But – I needed three correcting prompt rounds.

    Here is the source image for comparison:

    Sample hand-drawn graph


    Let’s move on to the next LLM.

    Perplexity with GPT-4 Omni – PromptB

    digraph Graph {
    // Define subgraph for "The Environment"
    subgraph cluster_Environment {
        label = "The Environment";
        World [label="World"];
        Goal [label="Goal"];
        User [label="User"];
        Machine [label="Machine"];
    }

    // Define subgraph for "The Team"
    subgraph cluster_Team {
        label = "The Team";
        Sensemaking [label="Sensemaking"];
        Engineer [label="Engineer"];
    }

    // Edges within and between subgraphs
    World -> Goal [label="in"];
    World -> Machine [label="interacts with"];
    Goal -> User [label="has"];
    User -> Goal [label="is in"];
    Machine -> World [label="interacts with"];
    Machine -> Sensemaking [label="of"];
    Sensemaking -> Engineer [label="needs"];
    Engineer -> Machine [label="builds"];
}
    Perplexity-GPT-4 Omni – PromptB – Result1

    Assessment:

    • One syntax error – notably, the only one in all my tests: the digraph name “Graph” is invalid because it is a (case-insensitive) reserved keyword of the DOT language. After renaming it to “G”, I can render the graph.
    • Six semantic errors – all involving edges (as in most cases).

    With too many errors, I move on to the next LLM.

    Perplexity with DeepSeek R1 – PromptB

    digraph G {
    // Subgraph for The Environment
    subgraph cluster_Environment {
        label = "The Environment";
        World [label="World"];
        Goal [label="Goal"];
        User [label="User"];
        Machine [label="Machine"];

        // Edges within The Environment
        World -> Goal [label="in"];
        World -> Machine [label="interacts with"];
        Goal -> World [label="is in"];
        Goal -> User [label="has"];
        Machine -> World [label="interacts with"];
    }

    // Subgraph for The Team
    subgraph cluster_Team {
        label = "The Team";
        Sensemaking [label="Sensemaking"];
        Engineer [label="Engineer"];

        // Edges within The Team
        Sensemaking -> Engineer [label="needs"];
    }

    // Edges between subgraphs
    Sensemaking -> Machine [label="of"];
}
    Perplexity-DeepSeekR1- PromptB – Result1

    I count eight semantic errors, all involving edges.
    Too many errors; I move on to the next LLM.

    GPT-4o – PromptB

    From chatgpt.com, without a paid subscription. GPT-4o stands for “GPT-4 Omni”.

    digraph G {
    // Subgraph: The Environment
    subgraph cluster_environment {
        label = "The Environment";
        World;
        Goal;
        User;
        Machine;

        // Edges in The Environment
        World -> Goal [label="interacts with"];
        Goal -> World [label="is in"];
        Goal -> User [label="has"];
        User -> Goal [label="is in"];
        Machine -> World [label="interacts with"];
        Machine -> User [label="interacts with"];
    }

    // Subgraph: The Team
    subgraph cluster_team {
        label = "The Team";
        Sensemaking;
        Engineer;

        // Edges in The Team
        Sensemaking -> Engineer [label="needs"];
        Engineer -> Machine [label="builds"];
    }
}
    GPT-4o- PromptB – Result1

    Assessment:

    I count about six semantic errors. I wonder if an LLM could be used to automate error-counting – perhaps another time.
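    Pending an LLM for that job, a crude count can already be mechanized: extract the labelled edges from both scripts and take the symmetric difference. A sketch (my illustration, not a tool I used; it only parses the simple edge statements of this post and ignores subgraph membership):

```python
import re

# Match simple edge statements of the form  A -> B [label="..."];
EDGE_RE = re.compile(r'(\w+)\s*->\s*(\w+)\s*\[[^\]]*label="([^"]*)"')

def semantic_edge_errors(candidate_dot, target_dot):
    """Count edges missing from the candidate plus edges it should not have."""
    candidate = set(EDGE_RE.findall(candidate_dot))
    target = set(EDGE_RE.findall(target_dot))
    return len(target - candidate) + len(candidate - target)
```

    Note that a reversed edge counts twice (once missing, once spurious), so this is only a rough proxy for my manual tally.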

    Next day, same GPT-4o LLM, same PromptB:

    digraph G {
    subgraph cluster_Environment {
        label = "The Environment";
        World;
        Goal;
        User;
        Machine;
        
        World -> Goal [label="in"];
        Goal -> User [label="has"];
        User -> Machine [label="interacts with"];
        Machine -> World [label="interacts with"];
    }
    
    subgraph cluster_Team {
        label = "The Team";
        Sensemaking;
        Engineer;
        
        Sensemaking -> Engineer [label="needs"];
    }
    
    Engineer -> Machine [label="builds"];
    Machine -> Sensemaking [label="of"];
}
    GPT-4o- PromptB – Result2

    Although the number of errors is about the same, the result is slightly better, as only one edge is missing, I believe. There are many errors in edge direction.

    DeepSeek V3 – PromptB

    From chat.deepseek.com, without a paid subscription.

    digraph G {
    subgraph cluster_Environment {
        label="The Environment";
        World -> Grad [label="in"];
        Grad -> Lisen [label="has"];
        Lisen -> Machine [label="Interacts with"];
        Machine -> builds [label="interacts with"];
    }

    subgraph cluster_Team {
        label="The Team";
        Sensemaking -> Engineer [label="heads"];
    }

    // Define nodes not in subgraphs
    builds [label="builds"];
}
    DeepSeekV3- PromptB – Result1

    Assessment:

    Too many errors – I stopped counting.
    For the first time, the labels are wrong, and a node gets the name of an edge. This is far worse than the previous LLMs.

    Mistral AI Le Chat

    From https://chat.mistral.ai, without a paid subscription.

    digraph G {
    // Define subgraphs
    subgraph cluster_Environment {
        label = "The Environment";
        World;
        Goal;
        User;
        Machine;
    }

    subgraph cluster_Team {
        label = "The Team";
        Sensemaking;
        Engineer;
    }

    // Define edges within The Environment subgraph
    World -> Machine [label="interacts with"];
    World -> Goal [label="in"];
    Goal -> World [label="is in"];
    Goal -> User [label="has"];
    User -> Machine [label="interacts with"];
    Machine -> cluster_Team [label="builds"];

    // Define edges within The Team subgraph
    Sensemaking -> Engineer [label="needs"];
}
    MistralAI_LeChat – PromptB – Result1

    I counted nine semantic errors, which is poor.

    Conclusion

    The question was whether LLMs available today support the conversion of a simple graph into its textual representation.

    In short, yes. Reasonably.

    Here is what I learned:

    • Some LLMs available today can perform the task, with the best model producing only one essential semantic error on the initial prompt (Claude – PromptB – Result1 above).
    • Claude 3.5 Sonnet ranks best, ChatGPT 4o second. Some LLMs perform poorly.
    • The prompt matters. A step-by-step approach improved the results.
    • Most errors occur during semantic extraction of edges – particularly in determining their direction.
    • Nodes, subgraphs, and labels were generally recognized accurately, except by DeepSeek V3.
    • The translation of these elements into Graphviz syntax also works almost perfectly.
    • In other words, LLMs are freeing me from tedious transcription and syntax issues.
    • This is a tiny step in a huge domain; there is much more to explore, e.g. larger graphs and different kinds of diagrams.
    • More efficient and reliable tools likely exist. One LLM suggested a multi-step process involving image processing, text recognition, and graph reconstruction, using Python libraries for OCR, node, and edge detection.
    • I’m excited to explore future applications, such as using LLMs to generate conceptual models from existing knowledge in free text.

    I look forward to your comments on the LinkedIn post. You can also find more information on my blog: https://recognizing.ch

    #AI, #LLM, #Modelling, #Graphviz, #Engineering

  • Why Conceptual Modelling?

    Have you ever sat through a meeting where engineers talk past each other? Where requirements feel like checkboxes to tick rather than insights to share?

    Following up on our previous post, where we explored the barriers to adoption, I will show how Conceptual Modelling contributes to engineering, what role requirements engineers play in it, and why it matters. Let us first introduce the engineering context.

    Engineering: Building Machines for User Goals in the World

    Diagram showing how Engineering is driven by user goals in the world.
Graphviz script:
digraph G {
    edge[dir=back];
    style=filled;
    color=lightgrey;
    node [style=filled,color=white];
    compound=true;
    subgraph cluster_world {
        World -> Machine -> User[label="interacts with  " ];
        World -> User[label="is in" ];
        World -> Goal[label="in" ];
        Goal -> User [label="has" ];
        label = "The world";
    }
    subgraph cluster_disc {
        Engineering;
        label = "The disciplines";
    }
    Machine -> Engineering[label="builds"];
    Goal -> Engineering [label="is driven by" ];
}

    In The World and the Machine, Michael Jackson writes, […] we are engineers because we make useful machines. We are concerned both with the world, in which the machine serves a useful purpose, and with the machine itself. [Jackson, 1995].

    Having spent the past 12 years at Sonova, I’ll illustrate this post with hearing devices. The User is the wearer of the hearing devices, together with her or his family and audiologist. The Machine is the pair of hearing devices and their companion apps. The World includes the user’s conversation partners, mobile phone, the audiologist’s equipment, the rain… The main Goal is to enjoy social interactions. Last, Engineering – the discipline of building Machines for Users in the World – gathers stakeholders beyond R&D, such as Product Marketing, Usability Engineering, Manufacturing, and Procurement. I’ll refer to them all either as stakeholders or as Engineers.

    To succeed, engineers must understand not just the hearing devices but also the user’s World. This brings us to Sensemaking.

    Sensemaking: Making Sense of the World

    To build the right Machine, Engineers need to make sense of the World and the Machine as they interact. I refer to this process as Sensemaking.

    Diagram showing how Sensemaking is concerned with the World.
Here is the Graphviz script:
digraph WhySenseMaking {
    edge[dir=back];
    style=filled;
    color=lightgrey;
    node [style=filled,color=white];
    compound=true;
    subgraph cluster_world {
        World -> Machine -> User[label="interacts with" ];
        World -> User[label="is in" ];
        World -> Goal[label="in" ];
        Goal -> User [label="has" ];
        label = "The world";
    }
    subgraph cluster_Stakeholders {
        Sensemaking -> Engineer [label="needs" ];
        label = "The team";
    }
    Engineer -> Machine [dir=forward label="builds"];
    World -> Sensemaking [label="of" ltail="cluster_world"];
}

    Consider our hearing devices. Engineers must make sense of many aspects, including:

    • How diverse is the morphology of human ears?
    • Do hearing device users wear bicycle helmets and sunscreen on rainy days?
    • Is hearing involved in speaking? How do hearing devices affect own speech production?

    Sensemaking is the collective cognitive process of apprehending the World – discovering, naming, questioning, structuring and sharing; agreeing and disagreeing; by the team and for the team.

    Requirements Engineering: Facilitating Sensemaking

    Requirements Engineering (RE) is often seen as a bureaucratic burden imposed by regulatory obligations. This perception is unfortunate, as it prevents RE practitioners from delivering value – for example, when they are not invited to early design meetings – and because it spoils the fun, driving requirements engineers to chase trendier titles like Project Manager.

    Document stack – from Freepik.com

    I have experienced the essential value RE can contribute, which I want to share.

    Requirements Engineering is the engineering discipline concerned with the real-world goals for, functions of, and constraints on machines [Zave, 1995].

    Klaus Pohl breaks this down into three dimensions: understanding, agreement, and documentation(*):

    Klaus Pohl’s three dimensions of Requirements Engineering: understanding, agreement, and documentation.
    Requirements engineering is a cooperative, iterative, and incremental process which aims at ensuring that:
    (1) All relevant requirements are explicitly known and understood at the required level of detail.
    (2) A sufficient agreement about the system requirements is achieved between the stakeholders involved.
    (3) All requirements are documented and specified in compliance with the relevant documentation/specification formats and rules. [Pohl, 2010].

    I argue that the essential purpose of requirements engineering is Sensemaking—facilitating the cognitive journey of the project stakeholders towards understanding and agreement, about the World, and about the Machine as it interacts with its context.

    Requirements engineers are best suited to support the team in its Sensemaking journey – especially with their expert toolbox: Conceptual Modelling.

    Conceptual modelling: The Toolbox for Sensemaking

    Let’s first define Modelling.

    Modeling in its broadest sense is the cost−effective use of something in place of something else for some purpose. It allows us to use something that is simpler, safer, or cheaper than reality instead of reality for some purpose [Rothenberg, 1989] 

    In our engineering context, Conceptual Modelling is the process of developing an abstracted representation of the Machine, the World, their interfaces, including the Users and their Goals, often depicted using graphical notations.

    With this set of diagrams, Conceptual Modelling helps humans to think together. Here is how:

    1. Conceptual Modelling reduces cognitive load

    Studies show that, for complex and interconnected concepts, humans process information presented as visual diagrammatic representations faster than as textual sentential representations [Larkin & Simon, 1987].
    Consider the two forms of the same semantic network below. How much effort does it take to make sense of each?

    digraph TheWorldAndTheMachine {
        subgraph cluster_world {
            World -> Machine -> User[label="interacts with" ];
            World -> User[label="is in" ];
            World -> Goal[label="in" ];
            Goal -> User [label="has" ];
            label = "The world";
        }
        subgraph cluster_disciplines {
            Engineering;
            label = "The disciplines";
        }
        Machine -> Engineering[label="builds"];
        Goal -> Engineering [label="is driven by" ];
    }
    Diagram showing how Engineering is driven by user goals in the world.
Graphviz script:
digraph G {
    edge[dir=back];
    style=filled;
    color=lightgrey;
    node [style=filled,color=white];
    compound=true;
    subgraph cluster_world {
        World -> Machine -> User[label="interacts with  " ];
        World -> User[label="is in" ];
        World -> Goal[label="in" ];
        Goal -> User [label="has" ];
        label = "The world";
    }
    subgraph cluster_disc {
        Engineering;
        label = "The disciplines";
    }
    Machine -> Engineering[label="builds"];
    Goal -> Engineering [label="is driven by" ];
}

    2. Conceptual Modelling enforces focus

    Each model (or modelling language) focuses on a particular aspect of reality, excluding all others. By choosing a model, participants force themselves to focus on this particular aspect. As an example, modern musical notation excels at representing Western melodies, and it is very bad for most other uses.

    Compared to natural language, this reduction of freedom is a major simplification, as it reduces the dimensions of the problem space, thereby preventing ambiguity and misunderstanding. This focus also enables considerably more precision. Can you imagine Mozart writing his music in German?

    3. Conceptual Modelling fosters multiplicity of viewpoints

    Using multiple diagrams in a Conceptual Modelling framework encourages stakeholders to consider multiple perspectives, fostering deeper and more disciplined thinking.

    When applied effectively, these three qualities—reduced cognitive load, focus, and multiplicity of viewpoints—accelerate, simplify, and enrich collective thinking, understanding, and problem-solving.

    Conceptual modelling is, therefore, the perfect toolbox for sensemaking.

    Conceptual Modelling in Real Life

    Following up on my hearing-device engineering example: at Sonova, using a model called a causality diagram, we have recently explored top user pains related to hearing. In small groups, with multi-disciplinary experts from audiology to electroacoustics, we systematically investigated and modelled the causal relationships between user goals, the involved system performances, and the device technologies and functions.

digraph CausalityDiagram {
    rankdir="LR";
    edge[dir=back];
    style=filled;
    color=lightgrey;
    node [style=filled,color=white];
    compound=true;
    subgraph cluster_Needs {
        OwnVoiceQ;
        label = "User Needs";
    }
    subgraph cluster_SysPerf {
        Occlusion;
        PerfP;
        label = "Syst Perfs";
    }
    subgraph cluster_Techno {
        TransdTechT;
        FeatureF;
        label = "Tech. and features";
    }
    OwnVoiceQ -> Occlusion [label="alters"];
    OwnVoiceQ -> PerfP [label="improves"];
    Occlusion -> TransdTechT [label="increase"];
    PerfP -> FeatureF [label="increase"];
    Occlusion -> FeatureF [label="decrease"];
}
    Example causality diagram – “Am I satisfied with my own voice?”
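
    Hand-editing DOT text like this gets tedious as diagrams grow. As a rough illustration only (this is not part of any tooling we used), a short Python helper could assemble the same kind of DOT text from plain cluster and edge tuples; the function name and the numbered cluster naming scheme are my own invention:

```python
# Illustrative sketch: assemble a Graphviz causality diagram from plain data.
# The node names, cluster labels, and edge labels mirror the hand-written
# example above; the helper itself is hypothetical.

def causality_dot(clusters, edges):
    """Build DOT text. `clusters` maps a cluster label to its node list;
    `edges` is a list of (head, tail, label) tuples."""
    lines = [
        "digraph CausalityDiagram {",
        '    rankdir="LR";',
        "    edge[dir=back];",
    ]
    for i, (label, nodes) in enumerate(clusters.items()):
        lines.append(f"    subgraph cluster_{i} {{")
        lines.extend(f"        {node};" for node in nodes)
        lines.append(f'        label = "{label}";')
        lines.append("    }")
    for head, tail, label in edges:
        lines.append(f'    {head} -> {tail} [label="{label}"];')
    lines.append("}")
    return "\n".join(lines)

dot = causality_dot(
    {"User Needs": ["OwnVoiceQ"],
     "Syst Perfs": ["Occlusion", "PerfP"],
     "Tech. and features": ["TransdTechT", "FeatureF"]},
    [("OwnVoiceQ", "Occlusion", "alters"),
     ("OwnVoiceQ", "PerfP", "improves"),
     ("Occlusion", "TransdTechT", "increase"),
     ("PerfP", "FeatureF", "increase"),
     ("Occlusion", "FeatureF", "decrease")])
print(dot)
```

    Feeding the printed text to the `dot` command-line tool would render the diagram.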

    The resulting causality diagrams, the working sessions to get there, and the sharing session with the larger team have been focused, energizing, enjoyable and mind-stretching, revealing new trade-offs and dependencies to the larger community.

    Our Framework

    I will go through the details of our Easy Framework for Conceptual Modelling in further posts. For now, let’s conclude our quest for the role and rationale for Conceptual Modelling.

    The Big Picture

    The big picture, adding RE and Conceptual Modelling to the previous views. In Graphviz:

digraph G {
    edge[dir=back];
    style=filled;
    color=lightgrey;
    node [style=filled,color=white];
    compound=true;
    subgraph cluster_world {
        World -> Machine -> User [label="interacts with"];
        World -> User [label="is in"];
        World -> Goal [label="in"];
        Goal -> User [label="has"];
        label = "The world";
    }
    subgraph cluster_Stakeholders {
        Sensemaking -> Engineer [label="needs"];
        label = "The team";
    }
    subgraph cluster_disc {
        Engineering -> RequirementsEngineering [label="is part of"];
        RequirementsEngineering -> ConceptualModelling [label="toolbox for"];
        label = "The disciplines";
    }
    Machine -> Engineer [label="builds"];
    Sensemaking -> RequirementsEngineering [label="purpose is"];
    Goal -> Engineering [label="is driven by"];
    User -> Sensemaking [label="of", ltail="cluster_world"];
    Engineer -> ConceptualModelling [dir=forward, label="thinks with"];
    RequirementsEngineering -> Engineer [dir=forward, label="serves"];
}

    Conceptual Modelling is not about drawing nice diagrams—it’s about facilitating collective cognition as Engineers make sense of the World, thereby building Machines which truly serve their Users.

    Stay tuned as we dig into our Easy Framework for Conceptual Modelling! And I’d love to get your views and experience as a reply to the copy of this article on LinkedIn.

    Sources

    [Jackson, 1995] M. Jackson, “The world and the machine,” doi: 10.1145/225014.225041.

    [Zave, 1995] P. Zave, “Classification of research efforts in requirements engineering,” doi: 10.1109/ISRE.1995.512563.

    [Pohl, 2010] K. Pohl, “Requirements engineering: fundamentals, principles, and techniques.” Heidelberg; New York: Springer, 2010.

    [Rothenberg, 1989] J. Rothenberg, “The Nature of Modeling.”

    [Larkin & Simon, 1987] J. Larkin and H. Simon, “Why a Diagram is (Sometimes) Worth Ten Thousand Words.”

    (*) I’ve renamed “content” to “understanding” because I claim this is the real purpose. Klaus, I’d love to chat about that with you one day.

  • An Easy Framework for Conceptual Modelling

    Throughout my career, I’ve faced a recurring challenge: convincing fellow engineers to embrace conceptual modelling in early-phase engineering. Whether as a foundation for requirements documents, or as a vehicle for domain sensemaking with the team, the value of conceptual modelling is widely recognized—but it rarely gets done.

    Why is that?

    The problem lies in the complexity of the practice:

@startmindmap
+ Rarely gets done, why?
++ Vastness of Modelling
++ Modelling the Wrong Thing
++ One Size Does NOT Fit All
++ Confusing the Phases
@endmindmap

    1. Vastness of Modelling:
      Models are everywhere. We use them daily to make sense of the world. But this abundance can be a handicap. Without structure and constraints, modelling efforts lack the channelling and direction needed for repeatable outcomes.
    2. Modelling the Wrong Thing:
      Modelling is, obviously, abstract. Without strong constraints, I’ve often seen the modeller focusing on the feature instead of the machine, or on stakeholders instead of actors. Or mistaking a state for a function. Once you’re in such a trap, whatever your efforts, you end up with confusion, and distrust of the practice.
    3. One Size Does NOT Fit All:
      The tools and techniques suitable for banking applications may not work for medical devices. We need to recognize and name the scope for which our sensemaking effort and conceptual modelling toolbox are applicable. While I recognize the huge value of the IREB initiative gathering it all in one book, I envision something narrower, focused on the market-driven, software-intensive device engineering domain I know, ready to use out of the box.
    4. Confusing the Phases:
      While models serve from early prototyping to detailed design, their purposes may differ. I’ve seen modelling techniques misapplied at the wrong engineering phase, again leading to confusion and distrust in the discipline. I’ll strictly consider the machine as a “black box in the world”.

    The Easy Framework for Conceptual Modelling

    Based on my practice in market-driven, software-intensive device engineering, I want to explore these issues and share my Easy Framework for Conceptual Modelling—the pragmatic toolkit I’ve been growing over the years.

    The Easy Framework for Conceptual Modelling – www.freepik.com

    This framework classically includes:

    • A structured set of models, offering multi-faceted viewpoints on both the world and the machine
    • For the purpose of sensemaking with the team – which I claim is the essential purpose of requirements engineering
    • Each model following an established modelling language
    • Each model’s rationale, practical guidance and examples

    I’m also intrigued by the potential of large language models to assist in transforming existing knowledge – e.g. your company’s internal wiki – into such models. How far can AI tools enhance the process? Publications on the topic are exploding; I’ll dig into it.

    I’ll share my learning and insights here on recognizing.ch, and sample milestones on LinkedIn.

    Now, what do you think? Do you recognize the problem? Do you see value in this exploration? Are there gaps we should focus on? Or do you feel like joining the effort? I’d love to hear your thoughts!

  • Where am I writing from

    Here is where I am writing from – what I’ve done and what I know.

@startmindmap
<style> 
node {MaximumWidth 280} 
</style>
+ Discipline
++_ Requirements Engineering
++_ Systems Engineering
+++ in the “INCOSE” sense, not MS365 or Azure Cloud…
@endmindmap




@startmindmap
<style> 
node {MaximumWidth 280} 
</style>
+ Product\ntypology
++_ Market driven software intensive ecosystems
++_ One\n by\n one
+++_ Market\nDriven
++++ Where future clients are not aware of the product at the time it is engineered.
+++_ Software\nintensive
++++ Where software accounts for a major part of the R&D costs
+++_ Eco\nsystems
++++ Where the engineered **Machine** interacts with other machines.
++++_ Machine
+++++ An engineered object with a purpose in the world. Does not need to have hardware, but the actual interface with humans is tangible. Machines are often part of **Product families** 
++++_ Product\nfamilies
+++++_ where\n variability\n is involved
++++++ variability in time, with successive **versions** released over time
++++++ variability in space, with (possibly simultaneous) **variants** of the machine
@endmindmap
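
    The variability notions above – versions in time, variants in space – can be sketched as a minimal data model. This is purely illustrative: the class names, fields and product names are my own invention, not an established schema:

```python
from dataclasses import dataclass, field

@dataclass
class Machine:
    """An engineered object with a purpose in the world; its interface
    with humans is tangible, even when there is no custom hardware."""
    name: str

@dataclass
class ProductFamily:
    """A set of related Machines exhibiting variability."""
    name: str
    # Variability in time: successive versions released over time.
    versions: list = field(default_factory=list)
    # Variability in space: (possibly simultaneous) variants of the machine.
    variants: dict = field(default_factory=dict)

family = ProductFamily("Hearing Aid X")    # hypothetical product name
family.versions.append("1.0")              # a version released over time
family.variants["BTE"] = Machine("X-BTE")  # a simultaneous variant
```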




@startmindmap
<style> 
node {MaximumWidth 280} 
</style>
+ Business domain
++_ Medical Devices – hearing devices & their ecosystem
+++ Sonova AG
++_ Mobile phone industry
+++ Motorola
+++ Mitsubishi Electric
++_ Telecommunications
+++ Alcatel
++_ Various customers, e.g.
+++ Renault’s Technocentre
+++ SNCF
+++ Ecole Centrale Paris
+++ Université Paris 1 Panthéon-Sorbonne
+++ Polarion
@endmindmap




  • Why REcognizing.ch

    In this blog I explore sensemaking and conceptual modelling in Requirements Engineering – in the context of market-driven, software-intensive product engineering.

    What ?

    • sensemaking is the process of representing knowledge about the candidate problem and solution so that engineers can think, understand each other, agree, verify… so that they can engineer.
    • conceptual modelling is the toolbox which enables sensemaking

    Why ?

    • Because I believe the mission of cognitively enabling the engineering teams is the core purpose of requirements engineering (RE). “Cognitively enabling” here means “delivering the concepts with which people think together”
    • Because sensemaking is where Requirements Engineers maximize the value they deliver. This is the part of the job which “makes sense” (in various dimensions here). This is the part that is fun.
    • For employers and RE team managers, sensemaking is attractiveness, motivation and retention. This is investing in the future.
    • Because, although good conceptual models ultimately look simple and obvious, I have observed they are often poorly realized and therefore disregarded. Conceptual modelling is “an art [..] which requires apprenticeship” [Bernhard Thalheim].
    • Because, as automation (Knowledge Graphs, ML, AI, ..) frees (human) software engineers from repetitive tasks towards higher levels of abstraction, from writing code towards model-driven engineering, conceptual modelling will play a key role interfacing humans and machines while they make sense of “what we want to build, and why”. This is me being excited.
    • Because conceptual models, with both their formality and their quality of being grasped at a glance, may become handy when exchanging knowledge with machines – not only in the context of engineering.

    In short, a bunch of topics I feel delighted to dive into.

    Another why ?

    • Because, after twelve years leading Requirements Engineering at Sonova (see my LinkedIn page), I am looking for my next occupation, and I feel that exploring what matters to me is both fulfilling – “sense-making” :-) – and likely to increase my chances of reaching the organizations which will need my services.

    I hope you’ll enjoy it; feel free to contact me and react.