1. Introduction
Functional decomposition (FD) is a keystone of engineering design, enabling the systematic breakdown of complex systems into their constituent functions (Pahl et al., 2007; VDI 2221-1, 2019; Schlattmann & Seibel, 2021). It plays an important role in the early stages of product development by fostering a clearer understanding of system operations and facilitating the generation of innovative solutions. However, FD traditionally relies on expert knowledge and manual effort, making it resource-intensive and subject to variability. Automating this process could improve consistency, efficiency, and accessibility, especially as engineering challenges become increasingly complex.
Advances in artificial intelligence, particularly in large language models (LLMs), have introduced new possibilities for automating tasks that require structured reasoning and semantic understanding. LLMs have demonstrated their capability to perform complex problem-solving (Hao et al., 2024) and generate structured outputs in various domains (Yao et al., 2024). Despite their success, the application of LLMs to structured engineering tasks—such as FD—remains underexplored. Previous research using planning algorithms to automate FD (Rosenthal et al., 2024) has shown promise but has also highlighted the need for more flexible and domain-specific reasoning approaches.
This paper investigates the use of LLMs for FD, focusing on GPT-4o and its enhancement through the proposed Monte Carlo tree search for functional decomposition (MCTS-FD) algorithm. This approach combines Roth’s functional taxonomy (Roth, 2000) with semantic graph-based reasoning and iterative refinement, providing a robust framework for automating FD. The methodology is evaluated in a study both qualitatively, by comparing the outputs from GPT-4o and GPT-4o supported by MCTS-FD against those of graduate engineering students, and quantitatively, using metrics that assess structural integrity, semantic correctness, and error rates. The paper highlights the potential of LLMs in automating FD and demonstrates their ability to generate outputs comparable to human expertise.
2. State of the art
2.1. Functions and function structures
According to VDI 2222-1 (1997), a function describes the general relationship between the input, the internal, and the output variables of a technical system to fulfil a task. To avoid predetermined problem solutions and allow for creative flexibility, functions should be formulated in a simple and abstract way (preferably as noun plus verb).
Looking at a technical product as a whole, its main purpose can be described by one (or more) overall function(s). The overall function is typically represented in the form of a “black box” that shows changes in material, energy, and information flows from input to output. Figure 1 shows the overall function of a coffee grinder with input and output flows as an example.

Figure 1. Example of the overall function for a coffee grinder (Schlattmann & Seibel, 2021)
To automate the creation of functions, general formulations such as “grinding of coffee beans” are not of great use, as theoretically an infinite number of functions could be formulated in this way. For this reason, Roth (2000) introduced the general functions “store,” “guide,” “convert,” and “transform” for the material, energy, and information flows, as well as “add” and “separate” to describe the interactions within and between these three flows. This results in a total of 30 general functions that can be used to formulate any functional activity of a technical product. An example of a general function formulation is “add information to energy” instead of “actuate switch”.
The overall function is typically broken down into sub-functions, which are smaller, more manageable tasks. There is no fixed or mandatory division—how sub-functions are divided depends on the specific problem. It is advisable to sort sub-functions into input, internal, and output functions in tabular form. An example of a function list for a coffee grinder is shown in Table 1.
Table 1. Example of a function list for a coffee grinder (Schlattmann & Seibel, 2021)

To work out the relationships between the individual sub-functions, they can be represented in a circuit diagram with arrows indicating the flows of material, energy, and information. These function structures provide a clear understanding of how the sub-functions interact and can be modified through changes in sequence, arrangement, or feedback mechanisms to explore alternative solutions. There is neither a “correct” nor a “complete” function structure (at most, an incorrect one), nor is it possible to enumerate all function structures for a given overall function.
2.2. Automated functional decomposition
2.2.1. Planning approach
Rosenthal et al. (2024) propose utilizing the similarities between FD in design and planning in artificial intelligence (AI). In design, functions are characterized as state transitions, much like how actions in AI planning manipulate states. By formulating FD as a planning problem, suitable AI planning algorithms can be applied to find useful function structures. In particular, the paper uses Roth’s (2000) catalogue of functions, which consists of 30 general functions related to material, energy, and information flows, as building blocks for the planning process.
The core method involves representing the FD problem using the planning domain definition language (PDDL)—a formal language for AI planning tasks (Russell & Norvig, 2021). A modified partial order planning (POP) algorithm is applied to solve the decomposition problem, enabling automatic generation of function structures.
To evaluate the approach, two case studies are presented: a coffeemaker and an ATM. In both cases, the planning algorithm successfully decomposed the product functions into sub-functions, demonstrating the effectiveness of the method. However, the algorithm still requires an experienced design engineer to define the input and output functions beforehand.
2.2.2. Other approaches
Automated FD is a growing field in engineering design, aimed at breaking down complex systems into structured, functional representations. Tools such as the Python package AutoFunc have automated this process by utilizing data from design repositories to create standardized functional models (Mikes et al., 2021). However, AutoFunc is no longer actively maintained, limiting its future application.
The procedural rule-based functional modelling structure (PRFS) presents an algorithmic approach that starts with customer requirements, aligns input/output flows, and utilizes grammar rules to hierarchically decompose functions (Mohammed & Shammari, 2021). This approach ensures consistency, compliance with physical laws, and modular structuring, enhancing concept generation while addressing the rigidity of traditional methods, which excel in reverse engineering but struggle with new product development (Zhang et al., 2002).
To advance functional modelling, data-driven methods such as association rule learning (e.g., the Apriori algorithm) have been developed to identify component-function-flow (CFF) relationships (Tensa et al., 2019; Edmonds et al., 2020). These methods leverage data from design repositories to identify patterns among components, functions, and flows, automating functional modelling and improving consistency.
Further refinement is achieved through text-mining methods that quantify interactions between modules in FD. Techniques such as topic modelling and cosine similarity are used to evaluate the interdependence of modules, enhancing the precision and accuracy of functional models (Murphy et al., 2023).
2.3. Large language models
Large language models like GPT-4o, Llama 3.3, and Qwen 2.5 represent significant advancements in natural language processing (NLP). These models are trained on vast datasets, allowing them to capture nuanced linguistic patterns and knowledge, and have demonstrated proficiency across a wide range of tasks—from language generation to machine translation and question-answering (Brown et al., 2020; Devlin et al., 2019). The Transformer architecture (Vaswani et al., 2017) is foundational to most LLMs. It employs self-attention, allowing the model to manage long-range dependencies, making it suitable for generating coherent and contextually accurate text.
Interacting effectively with LLMs requires a structured approach that utilizes their strengths in language comprehension and contextual generation. Ideal interaction strategies focus on clear prompts, iterative refinement, and use of specific guiding techniques to maximize the model’s effectiveness in producing accurate, relevant, and contextually appropriate responses. The latter involves providing the necessary background information to inform the model of the desired focus or domain. Approaches such as few-shot prompting have been shown to enhance the model’s performance when generating domain-specific responses (Brown et al., 2020). In this context, few-shot prompting involves providing examples within the prompt to illustrate the desired answer structure, which can improve the model’s consistency.
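As a minimal sketch of this idea (the function name and example strings below are illustrative placeholders, not the prompt used in this study), a few-shot prompt can be assembled by concatenating task instructions, worked examples, and the final query:

```python
def build_few_shot_prompt(instructions: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: task instructions, worked examples, then the query."""
    parts = [instructions]
    for product, decomposition in examples:
        parts.append(f"### Example: {product} ###\n{decomposition}")
    parts.append(f"### Task ###\nBuild a decomposition for: {query}")
    return "\n\n".join(parts)

# Hypothetical usage with one truncated example decomposition.
prompt = build_few_shot_prompt(
    "Decompose the product into general functions.",
    [("Toaster", "{'functions': ...}")],
    "Waffle maker",
)
```

The examples anchor the expected output structure, so the model is more likely to reply in the same machine-readable format.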
3. Methodology
Functional decomposition in technical product development involves breaking down complex systems into smaller, manageable functions to streamline the design process. This methodology aligns well with LLM capabilities, as these models can process complex information hierarchically, making connections between sub-functions and overarching goals.
While interpreting FD as a planning problem provides a valuable perspective, it is not the only way this process can be understood. Functional decomposition can also be effectively framed as knowledge graph construction (Zhong et al., 2023), a domain where LLMs excel. Knowledge graphs represent structured information by organizing entities (nodes) and their relationships (edges), making them well-suited for modelling function structures. In a similar way, FD involves identifying sub-functions (nodes) and their interdependencies (edges), creating a hierarchical and relational representation of a product’s operation.
Reformulating FD as knowledge graph construction highlights the importance of capturing relationships and dependencies, rather than solely sequencing actions as in planning. In this context, functions are not just steps in a plan, but conceptual nodes interconnected by a web of material, energy, and information flows. The decomposition process becomes a process of knowledge organization and semantic mapping, tasks for which LLMs are uniquely qualified due to their ability to comprehend and generate structured, relationship-rich outputs.
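Under this framing, a function structure maps directly onto a directed graph with typed edges. A minimal sketch in Python, using node names in the style of the JSON examples in Appendix A (the specific edges are illustrative):

```python
from collections import defaultdict

# Sub-functions are nodes; material/energy/information flows are typed, directed edges.
edges = [
    ("Store\n(Water)", "Guide\n(Water to Brewer)", "material"),
    ("Guide\n(Water to Brewer)", "Add Energy to Material\n(Thermal Energy to Water)", "material"),
]

# Adjacency lists per source node, keeping the flow type on each edge
# so downstream checks can reason about material, energy, and information separately.
adjacency = defaultdict(list)
for src, dst, flow in edges:
    adjacency[src].append((dst, flow))
```

Once a decomposition is held in such a structure, standard graph algorithms (connectivity, degree checks) become directly applicable.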
3.1. Few-shot learning and prompt engineering
To enable the LLM to perform FD effectively, a prompt was designed following the principles of few-shot learning. Few-shot learning leverages the pre-trained knowledge of the LLM and enables it to generalize from a small number of examples embedded within the prompt itself. The prompt was crafted to provide clear instructions for the task while including illustrative examples to showcase the expected structure of outputs. The entire prompt is provided in Appendix A.
The prompt begins with a structured introduction, providing definitions for the 21 general functions from Roth’s framework that are used in this work. The definitions serve as a reference for the LLM, ensuring a consistent understanding of the FD principles. By incorporating Roth’s taxonomy, the prompt provides a clear and structured framework that guides the model’s reasoning, aligning its outputs with established theoretical foundations.
The second part of the prompt defines clear rules for applying Roth’s general functions in creating FDs. The rules ensure logical consistency, adherence to Roth’s framework, and clarity in representation. The third part of the prompt incorporates few-shot learning, providing the LLM with detailed examples of FDs for three distinct products: a coffee machine with grinder, a Bluetooth speaker, and a toaster. These examples act as templates to guide the LLM in accurately applying Roth’s general functions. The prompt concludes with a task instructing the LLM to generate a general FD for a given product and represent it in a machine-readable JSON format, adhering to the style and structure of the provided examples.
3.2. Proposed algorithm
We implemented the Monte Carlo tree search for functional decomposition algorithm, an adaptation of the Monte Carlo tree self-refine (MCTSr) framework, which was originally designed to improve LLM performance in reasoning tasks. MCTSr integrates Monte Carlo tree search (MCTS) with iterative self-refinement and self-evaluation mechanisms, where the solutions are iteratively refined using feedback generated by the LLM and evaluated through model-based scoring functions. This iterative procedure involves navigating a search tree, where nodes represent possible solutions and edges denote transitions between refinements, guided by a balance between exploration and exploitation achieved through upper confidence bound (UCB) optimization (Zhang et al., 2024). MCTS-FD adapts this structure by replacing the original self-refinement and self-evaluation components with processes specifically designed for the graph properties of FD. The complete algorithm is provided on GitHub.
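The exploration–exploitation trade-off can be sketched with the standard UCB1 rule; the exploration constant and node bookkeeping used by MCTS-FD may differ, so this is an illustrative sketch rather than the implementation:

```python
import math

def ucb1(node_value: float, node_visits: int, parent_visits: int, c: float = 1.41) -> float:
    """UCB1 score: average value plus an exploration bonus for rarely visited nodes."""
    if node_visits == 0:
        return float("inf")  # always try unvisited refinements first
    return node_value / node_visits + c * math.sqrt(math.log(parent_visits) / node_visits)

def select_child(children: list[tuple[float, int]]) -> int:
    """Pick the index of the (total_value, visits) child with the highest UCB score."""
    parent_visits = sum(visits for _, visits in children)
    return max(range(len(children)),
               key=lambda i: ucb1(children[i][0], children[i][1], parent_visits))
```

During search, the selected child’s decomposition is refined, re-scored, and the result is backpropagated up the tree.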
Instead of relying on LLM-generated feedback, the refinement step in MCTS-FD inspects the FD graph for semantic errors, such as incorrectly configured nodes (e.g., an “add” function with only one input), or structural issues, such as disconnected components. Based on these detections, refinement suggestions are generated to correct these errors, ensuring both semantic and structural integrity. The evaluation step replaces LLM scoring with graph-based metrics, assigning positive points for complexity (e.g., number of nodes and edges) and penalizing semantic inconsistencies and disconnected subgraphs. This approach anchors the refinement and evaluation processes in objective graph-theoretical principles, ensuring that the resulting FDs are not only accurate but also computationally interpretable and optimized for further analysis. By employing MCTS for iterative refinement and graph-based scoring, MCTS-FD combines the exploratory strengths of the original MCTSr framework with domain-specific criteria for FD.
3.3. Qualitative analysis design
The qualitative analysis was structured across three sessions to evaluate the interpretability, relevance, and performance of FDs created by graduate students, GPT-4o, and GPT-4o enhanced by the proposed MCTS-FD algorithm. This design integrated collaborative learning with a blinded evaluation process to ensure unbiased feedback. In the first session, students were introduced to the principles of FD through a lecture and discussion. Examples of FDs were shown, explaining their structure, purpose, and common errors. In the second session, students were divided into teams of 2–3 participants and assigned the task of creating a function structure for a product of their choice. In the third session, students participated in a blinded evaluation using LimeSurvey to grade a set of FDs. The decompositions evaluated included those created by other student groups, GPT-4o, and GPT-4o + MCTS-FD.
3.4. Quantitative analysis design
The quantitative analysis evaluated the ability of various models to perform FD across a dataset of 50 products. Outputs were generated by Llama 3.1 (8B, 70B, and 405B), GPT-4o, and GPT-4o enhanced by MCTS-FD. The evaluation focused on structural integrity, semantic accuracy, error distribution, and system complexity to assess the models’ adherence to FD principles. Structural integrity is measured by the “fully connected” count, which identifies decompositions with no disconnected components, and the “total nodes” and “total edges” counts, reflecting the overall complexity. Semantic accuracy is assessed through the “error-free” count, indicating decompositions without semantic or structural errors, and the “correct format” rate, the percentage of outputs conforming to the required graph structure. To capture system complexity, the analysis also calculates the average number of nodes and edges per system, reflecting the decomposition richness, alongside the average errors per system, quantifying the frequency of issues.
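Assuming each decomposition has already been parsed into a record of basic graph properties, the reported metrics can be aggregated as follows (the record layout is a hypothetical simplification of the actual evaluation pipeline):

```python
def summarize(decompositions):
    """Aggregate evaluation metrics over a list of decomposition records.

    Each record: (num_nodes, num_edges, num_errors, is_connected, correct_format).
    """
    n = len(decompositions)
    return {
        "fully_connected": sum(1 for d in decompositions if d[3]),
        "error_free": sum(1 for d in decompositions if d[2] == 0),
        "correct_format_pct": 100.0 * sum(1 for d in decompositions if d[4]) / n,
        "avg_nodes": sum(d[0] for d in decompositions) / n,
        "avg_edges": sum(d[1] for d in decompositions) / n,
        "avg_errors": sum(d[2] for d in decompositions) / n,
    }
```

Running such a summary once per model over the 50-product dataset yields one row of a comparison table per model.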
4. Results and discussion
4.1. Qualitative analysis
Of the 20 decompositions submitted by student groups, eight were excluded due to significant errors or incompleteness, suggesting that the respective groups were unable to effectively complete the task. The remaining 12 decompositions were considered usable and included in the analysis alongside the outputs from GPT-4o and GPT-4o + MCTS-FD. The metrics analysed were completeness, logic, and creativity. The Likert responses are illustrated in Figure 2.

Figure 2. Likert responses in the qualitative analysis
In terms of completeness, GPT-4o recorded the highest mean score (3.96), with the students following closely behind (3.91), while GPT-4o + MCTS-FD had the lowest score (3.53). For logic, GPT-4o again achieved the highest mean score (4.00), indicating that its outputs were consistently well-structured and closely aligned with FD principles. The students scored slightly lower (3.87), also reflecting good logical consistency. GPT-4o + MCTS-FD followed with a mean score of 3.69, displaying competitive logical quality, although not reaching the level of GPT-4o alone or the top-performing student groups.
When evaluating creativity, the students achieved the highest score (3.48), demonstrating their ability to incorporate innovative and diverse ideas into their FDs. GPT-4o + MCTS-FD ranked second with a score of 3.32, slightly outperforming GPT-4o (3.12).
4.2. Quantitative analysis
The quantitative analysis evaluated the structural and semantic accuracy of FDs across outputs generated by Llama models (8B, 70B, 405B), GPT-4o, and GPT-4o enhanced with MCTS-FD. Metrics such as average nodes and average edges per system, error rates, as well as structural properties such as correct format, full connectivity, and error-free decompositions were analysed across 50 products. The dataset was created by algorithmically evaluating the output of 50 FDs per model, utilizing the graph structure. The results of the quantitative analysis are summarized in Table 2.
Table 2. Results of the quantitative analysis

The results illustrate that both GPT-4o and GPT-4o + MCTS-FD achieved perfect formatting, while the Llama models exhibited lower success rates. In terms of structural complexity, Llama 8B produced the most complex graphs, followed by Llama 405B and Llama 70B. GPT-4o generated simpler graphs and GPT-4o + MCTS-FD produced the least complex ones.
Error analysis showed that Llama 8B had the highest error rate, while GPT-4o and GPT-4o + MCTS-FD exhibited significantly fewer errors, with MCTS-FD having the lowest rate. In terms of connectivity and error-free outputs, MCTS-FD outperformed GPT-4o, which, in turn, performed significantly better than the Llama models, with Llama 8B demonstrating the poorest performance in both connectivity and error-free results.
4.3. Comparative example
Figure 3 shows a comparison of FDs of a waffle maker, created with GPT-4o + MCTS-FD (on the left) and Llama 8B (on the right). Llama 8B employs significantly more nodes than GPT-4o + MCTS-FD, resulting in a more chaotic FD with numerous intersecting connections. Furthermore, Llama 8B’s FD contains several semantic errors, such as an incorrect number of connections per node, which disrupts logical flow and coherence. In contrast, GPT-4o + MCTS-FD shows a clear and organized flow. Its FD is semantically correct, with each node having the correct number of connections, ensuring a consistent and logical representation of the waffle-making process. While GPT-4o + MCTS-FD provides a single, well-defined output, Llama 8B includes two additional output elements that are incorrectly connected, further highlighting a lack of clarity and understanding of functionality.

Figure 3. Comparison of waffle maker FDs: GPT-4o + MCTS-FD (left) vs. Llama 8B (right)
4.4. Discussion
The results of this study demonstrate the capability of LLMs, particularly GPT-4o and its enhancement with MCTS-FD, to generate FDs with a level of accuracy and coherence comparable to human experts. While both the qualitative and quantitative analyses highlight distinct strengths and weaknesses across the models, GPT-4o + MCTS-FD stands out as the most reliable and effective approach overall.
The qualitative results indicate that GPT-4o and its enhancement by MCTS-FD are highly competitive with graduate engineering students in terms of completeness and logical consistency. The high scores achieved by GPT-4o in these areas underscore its ability to produce well-structured and thorough FDs. GPT-4o + MCTS-FD, though slightly behind in qualitative metrics, illustrates substantial improvement over pure GPT-4o in creativity, thanks to its reasoning-driven iterative refinement process. Despite these strengths, the students outperformed all models in creativity, highlighting the continued importance of human intuition and domain knowledge in generating innovative and diverse FDs.
In the quantitative analysis, GPT-4o + MCTS-FD significantly outperforms both the considered Llama models and pure GPT-4o, achieving the highest scores in error-free outputs (42 %) and fully connected graphs (72 %). It exhibits more than a twofold improvement over pure GPT-4o, which achieved 16 % error-free outputs and 44 % connectivity. The iterative refinement mechanism in MCTS-FD enables it to identify and correct semantic inconsistencies and structural errors more effectively than pure LLMs. The Llama models, while creating the most complex graphs in terms of nodes and edges, suffer from high error rates and poor connectivity.
A trend observed in the quantitative analysis is the correlation between the number of parameters in the Llama models and their performance across various metrics, with larger models surpassing the smaller ones. As the model size increases, significant improvements are observed in structural complexity and semantic accuracy. Llama 405B, the largest model, consistently outperformed its smaller counterparts in metrics such as correct formatting (86 %), connectivity (40 %), and error-free outputs (10 %). These results highlight the growing capabilities of larger LLMs to produce more coherent and well-structured outputs, likely due to their enhanced capacity to capture complex patterns during pretraining.
However, while the improvements from 8B to 70B and 405B are notable, the performance gains from scaling are not entirely proportional to the increase in the number of parameters. For example, Llama 405B’s error-free output rate of 10 %, an improvement over 8B’s 0 %, does not reflect the dramatic leap one might expect from a model with substantially more parameters. These findings suggest that while scaling up parameters improves performance, additional strategies, such as fine-tuning or architectural modifications, may be necessary to fully exploit the potential of larger models.
5. Conclusion and outlook
This work showcases the potential of automating FD using LLMs, particularly GPT-4o enhanced with the MCTS-FD algorithm. By combining Roth’s general functions with semantic graph-based methods, this approach highlights the capability of LLMs to generate FDs that are not only comparable to human efforts in qualitative aspects such as completeness and logic but also exceed them quantitatively in terms of error-free outputs and graph connectivity.
Formulating FD as a knowledge graph construction strengthens its connection to planning problems by providing a structured and interpretable representation of the underlying relationships. The knowledge graph generated through FD serves as a foundation for planning algorithms, enabling efficient analysis, validation, and optimization of the decomposition. This approach effectively maps FD into the planning domain, emphasizing the complementary nature of these perspectives. Building on previous work that utilized Roth’s general functions and planning algorithms (Rosenthal et al., 2024), this study expands Roth’s framework by incorporating semantic graph-based reasoning methods, which replace traditional PDDL representations with flexible and iterative graph-based techniques, improving the adaptability of FD while preserving its structured principles. This integration underscores the synergistic relationship between knowledge graph construction and planning-based approaches.
Despite these advances, some challenges remain. Handling creative, non-standard FDs, integrating non-functional requirements, and accounting for domain-specific interdependencies are aspects that require further development. Furthermore, as with previous approaches, this work requires preprocessing steps to align inputs with Roth’s standards and ensure consistent graph representations, highlighting the importance of standardization in FD tasks.
Future work should focus on expanding the capabilities of semantic graph-based reasoning methods to address more specialized and complex FD problems. Exploring the application of MCTS-FD in design assistance systems and its potential to integrate human input for balancing creativity and precision are promising directions. Additionally, studying how interdependencies, such as physical effects, principal solutions, and requirement modelling, can be more effectively incorporated into the FD process will be crucial. Further exploration of alternative algorithms and their performance in graph-based FD systems could also provide valuable insights.
Acknowledgement
We would like to thank the participants of the master course “AI-supported Product Development” from Leuphana University Lüneburg for taking part in the qualitative analysis.
Appendix A. Prompt details for GPT-4o + MCTS-FD
### General Functional Decomposition ###
Store (Material): The object or material that is required for the process.
Store (Energy): The energy source required for operation.
Store (Information): Any necessary control data or user inputs.
Guide (Material): The material is directed to its appropriate destination.
Guide (Energy): Energy is transmitted through the system (e.g., electricity through wires, heat through a conductor).
Guide (Information): Data or instructions are transmitted to control the process.
Convert (Material): Change the form or state of the material.
Convert (Energy): Convert energy from one form to another (e.g., electrical to thermal energy).
Convert (Information): Transform data or signals into a new format or structure.
Add Material to Material: Add material to another material (e.g., combining ingredients, assembling components).
Add Energy to Material: Add energy to a material (e.g., heating, cooling, or applying pressure).
Add Information to Material: Introduce control signals or data to a material system (e.g., printing data onto a material).
Add Energy to Energy: Add energy to another energy source (e.g., amplifying power).
Add Information to Energy: Embed data into an energy signal (e.g., modulating radio waves).
Add Information to Information: Add one set of data to another (e.g., combining datasets).
Separate Material from Material: Isolate or remove one material from another (e.g., filtering, distillation).
Separate Energy from Material: Remove energy from material (e.g., cooling down, dissipating heat).
Separate Information from Material: Extract or isolate data from a physical medium (e.g., reading a barcode).
Separate Material from Energy: Remove material from energy (e.g., extracting solid objects from a radiation field).
Separate Information from Energy: Extract data from an energy form (e.g., demodulating a signal).
Separate Information from Information: Isolate or filter specific data from a larger dataset (e.g., noise filtering in communication).
### Rules to Build Functional Decomposition ###
- The input should not contain trivial Information such as settings or start process; ignore data as input if not otherwise stated or necessary.
- Convert only if it is a function by itself, not if it is part of Add or Separate.
- No optional output and inputs, just necessary ones.
- If something is used for two elements such as electricity for movement and heat, it has to be separated before, and then two separate transformations have to be performed: Separate Energy → Guide (Electricity) + Guide (Electricity).
- Come up with the input that should be there as Store (Input).
- Everything should converge to Store (Final Output) whatever it is for the product.
- Make sure to space the elements in the visualization well so they do not overlap.
- Use \n or multiple \n to break long function names.
### General Functional Decomposition for Coffee Machine with Grinder in JSON ###
{'functions': {'Store\n(Coffee Beans)': (0, 4), 'Store\n(Water)': (1, 4), 'Store\n(Electricity)': (2, 4), 'Guide\n(Coffee Beans to Grinder)': (0, 3), 'Guide\n(Water to Brewer)': (1, 3), 'Guide\n(Electricity to Grinder)': (2, 3), 'Guide\n(Electricity to Heater)': (3, 3), 'Convert\n(Electricity to Mechanical Energy)': (2, 2), 'Convert\n(Electricity to Thermal Energy)': (3, 2), 'Add Energy to Material\n(Mechanical Energy to Coffee Beans)': (0, 1), 'Add Energy to Material\n(Thermal Energy to Water)': (1, 1), 'Separate Material from Material\n(Brewed Coffee from Grounds)': (1, 0), 'Store\n(Brewed Coffee)': (0, -1), 'Store\n(Used Grounds)': (2, -1)}, 'energy_connections': [('Store\n(Electricity)', 'Guide\n(Electricity to Grinder)'), ('Store\n(Electricity)', 'Guide\n(Electricity to Heater)'), ('Guide\n(Electricity to Grinder)', 'Convert\n(Electricity to Mechanical Energy)'), ('Guide\n(Electricity to Heater)', 'Convert\n(Electricity to Thermal Energy)'), ('Convert\n(Electricity to Mechanical Energy)', 'Add Energy to Material\n(Mechanical Energy to Coffee Beans)'), ('Convert\n(Electricity to Thermal Energy)', 'Add Energy to Material\n(Thermal Energy to Water)')], 'material_connections': [('Store\n(Coffee Beans)', 'Guide\n(Coffee Beans to Grinder)'), ('Store\n(Water)', 'Guide\n(Water to Brewer)'), ('Guide\n(Coffee Beans to Grinder)', 'Add Energy to Material\n(Mechanical Energy to Coffee Beans)'), ('Guide\n(Water to Brewer)', 'Add Energy to Material\n(Thermal Energy to Water)'), ('Add Energy to Material\n(Mechanical Energy to Coffee Beans)', 'Separate Material from Material\n(Brewed Coffee from Grounds)'), ('Add Energy to Material\n(Thermal Energy to Water)', 'Separate Material from Material\n(Brewed Coffee from Grounds)'), ('Separate Material from Material\n(Brewed Coffee from Grounds)', 'Store\n(Brewed Coffee)'), ('Separate Material from Material\n(Brewed Coffee from Grounds)', 'Store\n(Used Grounds)')], 'information_connections': []}
### General Functional Decomposition for Bluetooth Speaker in JSON ###
{'functions': {'Store\n(Electricity)': (0, 4), 'Store\n(Bluetooth Signal)': (1, 4), 'Guide\n(Electricity to Components)': (0, 3), 'Guide\n(Bluetooth Signal to Receiver)': (1, 3), 'Convert\n(Bluetooth Signal to Digital Audio Signal)': (1, 2), 'Convert\n(Digital to Analog Audio Signal)': (1, 1), 'Add Information to Energy\n(Audio Signal with Amplified Power)': (0.5, 1), 'Convert\n(Electrical Energy to Sound)': (0.5, 0)}, 'energy_connections': [('Store\n(Electricity)', 'Guide\n(Electricity to Components)'), ('Guide\n(Electricity to Components)', 'Add Information to Energy\n(Audio Signal with Amplified Power)'), ('Add Information to Energy\n(Audio Signal with Amplified Power)', 'Convert\n(Electrical Energy to Sound)')], 'information_connections': [('Store\n(Bluetooth Signal)', 'Guide\n(Bluetooth Signal to Receiver)'), ('Guide\n(Bluetooth Signal to Receiver)', 'Convert\n(Bluetooth Signal to Digital Audio Signal)'), ('Convert\n(Bluetooth Signal to Digital Audio Signal)', 'Convert\n(Digital to Analog Audio Signal)'), ('Convert\n(Digital to Analog Audio Signal)', 'Add Information to Energy\n(Audio Signal with Amplified Power)')], 'material_connections': []}
### General Functional Decomposition for Toaster in JSON ###
{'functions': {'Store\n(Bread)': (0, 5), 'Store\n(Electricity)': (1, 5), 'Store\n(User Settings)': (2, 5), 'Guide\n(Bread to Toasting Chamber)': (0, 4), 'Guide\n(Electricity to Heating Element)': (1, 4), 'Guide\n(User Settings to Control System)': (2, 4), 'Add Information to Energy\n(User Settings and Electricity)': (1.5, 3), 'Convert\n(Electricity to Thermal Energy)': (1, 2), 'Add Energy to Material\n(Thermal Energy to Bread)': (0, 2), 'Store\n(Toasted Bread)': (0, 0)}, 'energy_connections': [('Store\n(Electricity)', 'Guide\n(Electricity to Heating Element)'), ('Guide\n(Electricity to Heating Element)', 'Add Information to Energy\n(User Settings and Electricity)'), ('Add Information to Energy\n(User Settings and Electricity)', 'Convert\n(Electricity to Thermal Energy)'), ('Convert\n(Electricity to Thermal Energy)', 'Add Energy to Material\n(Thermal Energy to Bread)')], 'material_connections': [('Store\n(Bread)', 'Guide\n(Bread to Toasting Chamber)'), ('Guide\n(Bread to Toasting Chamber)', 'Add Energy to Material\n(Thermal Energy to Bread)'), ('Add Energy to Material\n(Thermal Energy to Bread)', 'Store\n(Toasted Bread)')], 'information_connections': [('Store\n(User Settings)', 'Guide\n(User Settings to Control System)'), ('Guide\n(User Settings to Control System)', 'Add Information to Energy\n(User Settings and Electricity)')]}
Build a General Functional Decomposition and structure it in JSON like in the examples provided: [PRODUCT]