Module migration

===============================================================================
SFPPy Module: Migration Solver
===============================================================================
Implements a 1D finite-volume mass transfer solver (senspatankar()) for multilayer structures. Uses a modified Patankar scheme with exact solutions to handle partitioning at interfaces.

Main Components:
- senspatankar() (main solver)
    - Computes the time evolution of a migrating substance in a multilayer structure
    - Supports Robin, impervious, and periodic boundary conditions
    - Stores simulation results in SensPatankarResult
- SensPatankarResult (stores simulation outputs)
    - Concentration profiles in packaging (Cx) and food (CF)
    - Time-dependent fluxes
    - Includes interpolation and visualization methods

Integration with SFPPy Modules:
- Requires layer.py to define multilayer structures.
- Uses food.py to set food contact conditions.
- Relies on property.py for migration parameters (D, K).
- Calls geometry.py when volume/surface area calculations are needed.

Example:

from patankar.migration import senspatankar
solution = senspatankar(multilayer, medium)
solution.plotCF()

===============================================================================
Details
===============================================================================

This module provides a solver (senspatankar()) to simulate, in 1D, the mass transfer of a substance initially distributed in a multilayer packaging structure (layer) toward a contacting medium (foodlayer). It uses a finite-volume method adapted from the Patankar scheme to handle partition coefficients between all layers, as well as between the food and the contact layer (the food is on the left). The right boundary condition is assumed impervious (no mass transfer at the right edge).

The numerical method has been published here: Nguyen, P.-M., Goujon, A., Sauvegrain, P. and Vitrac, O. (2013), A computer-aided methodology to design safe food packaging and related systems. AIChE J., 59: 1183-1212. https://doi.org/10.1002/aic.14056

The module offers:
- methods to simulate mass transfer under various boundary conditions (Robin, impervious, periodic),
- simulation chaining (see the sketch below),
- result management (merging, editing, ...),
- plotting and printing-to-disk capabilities.
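
A minimal sketch of chaining and result management (illustrative; it assumes multilayer and medium are built as in the Example below, and relies on the >> and + operators defined on SensPatankarResult):

    sol1 = senspatankar(multilayer, medium)   # first contact
    sol2 = sol1 >> medium                     # chain a second contact (restarts from CF at ttarget)
    merged = sol1 + sol2                      # concatenate the two kinetics in time
    merged.plotCF()                           # plot the merged CF(t)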

Classes

  • SensPatankarResult

Functions

  • senspatankar(multilayer, medium, t=None, autotime=True, timescale="sqrt", ntimes=1e4, RelTol=1e-4, AbsTol=1e-4)

Example


    from patankar.food import ethanol
    from patankar.layer import layer

    # Create medium and layers
    medium = ethanol()
    A = layer(layername="layer A")
    B = layer(layername="layer B")
    multilayer = A + B

    # Run solver
    sol = senspatankar(multilayer, medium)

    # Plot results
    sol.plotCF()
    sol.plotC()

@version: 1.24
@project: SFPPy - SafeFoodPackaging Portal in Python initiative
@author: INRAE\olivier.vitrac@agroparistech.fr
@licence: MIT
@Date: 2022-01-17
@rev: 2025-03-05

Expand source code
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

"""
===============================================================================
SFPPy Module: Migration Solver
===============================================================================
Implements a **1D finite-volume mass transfer solver (`senspatankar`)** for multilayer structures.
Uses a modified Patankar scheme with exact solutions to handle partitioning at interfaces.

**Main Components:**
- **`senspatankar`** (Main solver)
    - Computes time evolution of a migrating substance in a multilayer structure
    - Supports **Robin, impervious, and periodic** boundary conditions
    - Stores simulation results in `SensPatankarResult`
- **`SensPatankarResult`** (Stores simulation outputs)
    - Concentration profiles in packaging (`Cx`) and food (`CF`)
    - Time-dependent fluxes
    - Includes interpolation and visualization methods

**Integration with SFPPy Modules:**
- Requires `layer.py` to define multilayer structures.
- Uses `food.py` to set food contact conditions.
- Relies on `property.py` for migration parameters (D, K).
- Calls `geometry.py` when volume/surface area calculations are needed.

Example:
```python
from patankar.migration import senspatankar
solution = senspatankar(multilayer, medium)
solution.plotCF()
```


===============================================================================
Details
===============================================================================

This module provides a solver (``senspatankar``) to simulate in 1D the mass transfer of a substance
initially distributed in a multilayer packaging structure (``layer``) toward a contacting medium (``foodlayer``).
It uses a finite-volume method adapted from the Patankar scheme to handle partition coefficients between all layers,
as well as between the food and the contact layer (food is on the left). The right boundary condition is assumed
impervious (no mass transfer at the right edge).

The numerical method has been published here:
    Nguyen, P.-M., Goujon, A., Sauvegrain, P. and Vitrac, O. (2013),
    A computer-aided methodology to design safe food packaging and related systems.
    AIChE J., 59: 1183-1212. https://doi.org/10.1002/aic.14056

The module offers:
    - methods to simulate mass transfer under various boundary conditions (Robin, impervious, periodic),
    - simulation chaining,
    - result management (merging, editing, ...),
    - plotting and printing-to-disk capabilities.


Classes
-------
- SensPatankarResult

Functions
---------
- senspatankar(multilayer, medium, t=None, autotime=True, timescale="sqrt", ntimes=1e4, RelTol=1e-4, AbsTol=1e-4)

Example
-------
```python

    from patankar.food import ethanol
    from patankar.layer import layer

    # Create medium and layers
    medium = ethanol()
    A = layer(layername="layer A")
    B = layer(layername="layer B")
    multilayer = A + B

    # Run solver
    sol = senspatankar(multilayer, medium)

    # Plot results
    sol.plotCF()
    sol.plotC()
```

@version: 1.24
@project: SFPPy - SafeFoodPackaging Portal in Python initiative
@author: INRAE\\olivier.vitrac@agroparistech.fr
@licence: MIT
@Date: 2022-01-17
@rev: 2025-03-05

"""
# Dependencies
import os
import random
import re
from datetime import datetime
from copy import deepcopy as duplicate
# math libraries
import numpy as np
from scipy.integrate import solve_ivp
from scipy.sparse import diags, coo_matrix
from scipy.interpolate import interp1d
from scipy.integrate import simpson, cumulative_trapezoid
from scipy.optimize import minimize
# plot libraries
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.colors as mcolors
from matplotlib.figure import Figure
# data libraries
import pandas as pd

# Local dependencies
from patankar.layer import layer, check_units, layerLink
from patankar.food import foodphysics,foodlayer

__all__ = ['CFSimulationContainer', 'Cprofile', 'PrintableFigure', 'SensPatankarResult', 'autoname', 'check_units', 'colormap', 'compute_fc_profile_PBC', 'compute_fv_profile', 'custom_plt_figure', 'custom_plt_subplots', 'foodlayer', 'foodphysics', 'is_valid_figure', 'layer', 'layerLink', 'print_figure', 'print_pdf', 'print_png', 'restartfile', 'restartfile_senspantakar', 'rgb', 'senspatankar', 'tooclear']

__project__ = "SFPPy"
__author__ = "Olivier Vitrac"
__copyright__ = "Copyright 2022"
__credits__ = ["Olivier Vitrac"]
__license__ = "MIT"
__maintainer__ = "Olivier Vitrac"
__email__ = "olivier.vitrac@agroparistech.fr"
__version__ = "1.24"

# Plot configuration (preferred units)
plotconfig = {
    "tscale": 24 * 3600, # days used as time scale
    "tunit": "days",
    "lscale": 1e-6, # µm
    "lunit": "µm",
    "Cscale": 1,
    "Cunit": "a.u."
    }
_fig_metadata_atrr_ = "__filename__"
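# (hidden figure attribute read by _generate_figname() to recover a preferred filename)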
# %% Private functions and classes

def autoname(nchars=6, charset="a-zA-Z0-9"):
    """
    Generates a random simulation name.

    Parameters:
    - nchars (int): Number of characters in the name (default: 6).
    - charset (str): Character set pattern (e.g., "a-zA-Z0-9").

    Returns:
    - str: A randomly generated name.
    """

    # Expand regex-like charset pattern
    char_pool = []
    # Find all ranges (e.g., "a-z", "A-Z", "0-9")
    pattern = re.findall(r'([a-zA-Z0-9])\-([a-zA-Z0-9])', charset)
    for start, end in pattern:
        char_pool.extend(chr(c) for c in range(ord(start), ord(end) + 1))
    # Include any explicit characters (e.g., "ABC" in "ABC0-9")
    explicit_chars = re.sub(r'([a-zA-Z0-9])\-([a-zA-Z0-9])', '', charset)  # Remove ranges
    char_pool.extend(explicit_chars)
    # Remove duplicates and sort (just for readability)
    char_pool = sorted(set(char_pool))
    # Generate random name
    return ''.join(random.choices(char_pool, k=nchars))

def is_valid_figure(fig):
    """
    Checks if `fig` is a valid and open Matplotlib figure.

    Parameters:
    - fig: object to check

    Returns:
    - bool: True if `fig` is a valid, open Matplotlib figure.
    """
    return isinstance(fig, Figure) and plt.fignum_exists(fig.number)

def _generate_figname(fig, extension):
    """
    Generate a clean filename based on metadata or current date/time.

    Parameters:
    - fig: Matplotlib figure object.
    - extension: File extension ('.pdf' or '.png').

    Returns:
    - str: Cleaned filename with correct extension.
    """
    # Try to retrieve the hidden filename metadata
    if hasattr(fig, _fig_metadata_atrr_):
        filename = getattr(fig, _fig_metadata_atrr_)
    else:
        # Default: Use date-time format if metadata is missing
        filename = "fig" + datetime.now().strftime("%Y%m%d_%H%M%S")
    # Clean filename (replace spaces, trim, remove special characters)
    filename = filename.strip().replace(" ", "_")
    # Ensure correct file extension
    if not filename.lower().endswith(extension):
        filename += extension
    return filename

def tooclear(color, threshold=0.6, correction=0.15):
    """
    Darkens a too-bright RGB(A) color tuple.

    Parameters:
    -----------
    color : tuple (3 or 4 elements)
        RGB or RGBA color in [0,1] range.
    threshold : float, optional (default=0.6)
        Grayscale threshold above which colors are considered too bright.
    correction : float, optional (default=0.15)
        Amount by which to darken too bright colors.

    Returns:
    --------
    tuple
        Adjusted RGB(A) color tuple with too bright colors darkened.

    Example:
    --------
    corrected_color = tooclear((0.9, 0.9, 0.7, 1.0))
    """
    if not isinstance(color, tuple) or len(color) not in [3, 4]:
        raise ValueError("Input must be an RGB or RGBA tuple.")
    rgb = color[:3]  # Extract RGB values
    # Compute grayscale brightness (mean of RGB channels)
    brightness = sum(rgb) / 3
    # Darken if brightness exceeds the threshold
    if brightness > threshold:
        rgb = tuple(max(0, c - correction) for c in rgb)
    return rgb + (color[3],) if len(color) == 4 else rgb  # Preserve alpha if present


def print_pdf(fig, filename="", destinationfolder=os.getcwd(), overwrite=False, dpi=300):
    """
    Save a given figure as a PDF.

    Parameters:
    - fig: Matplotlib figure object to be saved.
    - filename: str, PDF filename (auto-generated if empty).
    - destinationfolder: str, folder to save the file.
    - overwrite: bool, overwrite existing file.
    - dpi: int, resolution (default=300).
    """
    if not is_valid_figure(fig):
        print("no valid figure")
        return
    # Generate filename if not provided
    if not filename:
        filename = _generate_figname(fig, ".pdf")
    # Ensure full path
    filename = os.path.join(destinationfolder, filename)
    # Prevent overwriting unless specified
    if not overwrite and os.path.exists(filename):
        print(f"File {filename} already exists. Use overwrite=True to replace it.")
        return
    # Save figure as PDF
    fig.savefig(filename, format="pdf", dpi=dpi, bbox_inches="tight")
    print(f"Saved PDF: {filename}")


def print_png(fig, filename="", destinationfolder=os.getcwd(), overwrite=False, dpi=300):
    """
    Save a given figure as a PNG.

    Parameters:
    - fig: Matplotlib figure object to be saved.
    - filename: str, PNG filename (auto-generated if empty).
    - destinationfolder: str, folder to save the file.
    - overwrite: bool, overwrite existing file.
    - dpi: int, resolution (default=300).
    """
    if not is_valid_figure(fig):
        print("no valid figure")
        return
    # Generate filename if not provided
    if not filename:
        filename = _generate_figname(fig, ".png")
    # Ensure full path
    filename = os.path.join(destinationfolder, filename)
    # Prevent overwriting unless specified
    if not overwrite and os.path.exists(filename):
        print(f"File {filename} already exists. Use overwrite=True to replace it.")
        return
    # Save figure as PNG
    fig.savefig(filename, format="png", dpi=dpi, bbox_inches="tight")
    print(f"Saved PNG: {filename}")


def print_figure(fig, filename="", destinationfolder=os.getcwd(), overwrite=False, dpi=300):
    """
    Save the figure in both PDF and PNG formats.

    Parameters:
    - fig: Matplotlib figure object to be saved.
    - filename: str, base filename (auto-generated if empty).
    - destinationfolder: str, folder to save the files.
    - overwrite: bool, overwrite existing files.
    - dpi: int, resolution (default=300).
    """
    if is_valid_figure(fig):
        print_pdf(fig, filename, destinationfolder, overwrite, dpi)
        print_png(fig, filename, destinationfolder, overwrite, dpi)
    else:
        print("no valid figure")


# Categorized colors with headers and spacing
COLOR_CATEGORIES = [
    ("White & Gray", ["White", "Snow", "Honeydew", "MintCream", "Azure", "AliceBlue", "GhostWhite", "WhiteSmoke",
                      "Seashell", "Beige", "OldLace", "FloralWhite", "Ivory", "AntiqueWhite", "Linen",
                      "LavenderBlush", "MistyRose", "Gray", "Gainsboro", "LightGray", "Silver", "DarkGray",
                      "DimGray", "LightSlateGray", "SlateGray", "DarkSlateGray", "Black"], 2),

    ("Red, Pink & Orange", ["Red", "LightSalmon", "Salmon", "DarkSalmon", "LightCoral", "IndianRed", "Crimson",
                            "FireBrick", "DarkRed", "", "Pink", "LightPink", "HotPink", "DeepPink", "PaleVioletRed",
                            "MediumVioletRed", "", "Orange", "DarkOrange", "Coral", "Tomato", "OrangeRed"], 1),

    ("Yellow & Brown", ["Yellow", "LightYellow", "LemonChiffon", "LightGoldenrodYellow", "PapayaWhip", "Moccasin",
                        "PeachPuff", "PaleGoldenrod", "Khaki", "DarkKhaki", "Gold", "", "Brown", "Cornsilk",
                        "BlanchedAlmond", "Bisque", "NavajoWhite", "Wheat", "BurlyWood", "Tan", "RosyBrown",
                        "SandyBrown", "Goldenrod", "DarkGoldenrod", "Peru", "Chocolate", "SaddleBrown",
                        "Sienna", "Maroon"], 2),

    ("Green", ["Green", "PaleGreen", "LightGreen", "YellowGreen", "GreenYellow", "Chartreuse", "LawnGreen", "Lime",
               "LimeGreen", "MediumSpringGreen", "SpringGreen", "MediumAquamarine", "Aquamarine", "LightSeaGreen",
               "MediumSeaGreen", "SeaGreen", "DarkSeaGreen", "ForestGreen", "DarkGreen", "OliveDrab", "Olive",
               "DarkOliveGreen", "Teal"], 0),

    ("Blue", ["Blue", "LightBlue", "PowderBlue", "PaleTurquoise", "Turquoise", "MediumTurquoise", "DarkTurquoise",
              "LightCyan", "Cyan", "Aqua", "DarkCyan", "CadetBlue", "LightSteelBlue", "SteelBlue", "LightSkyBlue",
              "SkyBlue", "DeepSkyBlue", "DodgerBlue", "CornflowerBlue", "RoyalBlue", "MediumBlue", "DarkBlue",
              "Navy", "MidnightBlue"], 0),

    ("Purple", ["Purple", "Lavender", "Thistle", "Plum", "Violet", "Orchid", "Fuchsia", "Magenta", "MediumOrchid",
                "MediumPurple", "Amethyst", "BlueViolet", "DarkViolet", "DarkOrchid", "DarkMagenta", "SlateBlue",
                "DarkSlateBlue", "MediumSlateBlue", "Indigo"], 0)
]
# Extract colors from Matplotlib
CSS_COLORS = {k.lower(): v for k, v in mcolors.CSS4_COLORS.items()}

def rgb():
    """Displays a categorized color chart with properly aligned headers."""
    ncols = len(COLOR_CATEGORIES)
    max_rows = max(len(colors) + spacing for _, colors, spacing in COLOR_CATEGORIES)
    fig, ax = plt.subplots(figsize=(ncols * 2.5, max_rows * 0.6))
    ax.set_xticks([])
    ax.set_yticks([])
    ax.set_frame_on(False)
    x_spacing = 1.8  # Horizontal spacing between columns
    y_spacing = 1.0  # Vertical spacing between color patches
    text_size = 13   # Increased text size by 50%
    for col_idx, (category, colors, extra_space) in enumerate(COLOR_CATEGORIES):
        y_pos = max_rows  # Start at the top
        ax.text(col_idx * x_spacing + (x_spacing - 0.2) / 2, y_pos + 1.2, category,
                fontsize=text_size + 2, fontweight='bold', ha='center')
        y_pos -= y_spacing  # Move down after title
        for color in colors:
            if color == "":  # Empty string is a spacer
                y_pos -= y_spacing * 0.5
                continue
            hexval = CSS_COLORS.get(color.lower(), "white")
            y_pos -= y_spacing  # Move down before drawing
            ax.add_patch(plt.Rectangle((col_idx * x_spacing, y_pos), x_spacing - 0.2, y_spacing - 0.2, facecolor=hexval))
            r, g, b = mcolors.to_rgb(hexval)
            brightness = (r + g + b) / 3
            text_color = 'white' if brightness < 0.5 else 'black'
            ax.text(col_idx * x_spacing + (x_spacing - 0.2) / 2, y_pos + y_spacing / 2, color, ha='center',
                    va='center', fontsize=text_size, color=text_color)
        y_pos -= extra_space * y_spacing
    ax.set_xlim(-0.5, ncols * x_spacing)
    ax.set_ylim(-0.5, max_rows * y_spacing + 2)
    plt.tight_layout()
    plt.show()

# return colormaps
def colormap(name="viridis", ncolors=16, tooclearflag=True, reverse=False):
    """
    Generates a list of `ncolors` colors from the specified colormap.

    Parameters:
    -----------
    name : str, optional (default="viridis")
        Name of the Matplotlib colormap to use.
    ncolors : int, optional (default=16)
        Number of colors to generate.
    tooclearflag : bool, optional (default=True)
        If True, applies `tooclear` function to adjust brightness.
    reverse : bool, optional (default=False)
        If True, reverses the colormap.

    Supported colormaps:
    --------------------
    - "viridis"
    - "jet"
    - "plasma"
    - "inferno"
    - "magma"
    - "cividis"
    - "turbo"
    - "coolwarm"
    - "spring"
    - "summer"
    - "autumn"
    - "winter"
    - "twilight"
    - "rainbow"
    - "hsv"

    Returns:
    --------
    list of tuples
        List of RGB(A) colors in [0,1] range.

    Raises:
    -------
    ValueError
        If the colormap name is not recognized.
    """
    cmap_name = name + "_r" if reverse else name  # Append "_r" to reverse colormap
    # Check if the colormap exists
    if cmap_name not in plt.colormaps():
        raise ValueError(f"Invalid colormap name '{cmap_name}'. Use one from: {list(plt.colormaps())}")

    cmap = plt.colormaps.get_cmap(cmap_name)  # Fetch the colormap
    colors = [cmap(i / (ncolors - 1)) for i in range(ncolors)]  # Normalize colors
    return [tooclear(c) if tooclearflag else c[:3] for c in colors]  # Apply tooclear if enabled
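
# Usage sketch (illustrative):
#   palette = colormap("viridis", ncolors=5)            # 5 brightness-corrected RGB tuples
#   palette = colormap("jet", ncolors=8, reverse=True)  # reversed colormap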

# Define PrintableFigure class
class PrintableFigure(Figure):
    """Custom Figure class with print methods."""

    def print(self, filename="", destinationfolder=os.getcwd(), overwrite=False, dpi=300):
        print_figure(self, filename, destinationfolder, overwrite, dpi)

    def print_png(self, filename="", destinationfolder=os.getcwd(), overwrite=False, dpi=300):
        print_png(self, filename, destinationfolder, overwrite, dpi)

    def print_pdf(self, filename="", destinationfolder=os.getcwd(), overwrite=False, dpi=300):
        print_pdf(self, filename, destinationfolder, overwrite, dpi)

# ✅ Override `plt.figure()` and `plt.subplots()` to always use PrintableFigure
original_plt_figure = plt.figure
original_plt_subplots = plt.subplots

def custom_plt_figure(*args, **kwargs):
    """Ensure all figures are PrintableFigure."""
    kwargs.setdefault("FigureClass", PrintableFigure)
    return original_plt_figure(*args, **kwargs)

def custom_plt_subplots(*args, **kwargs):
    """Ensure plt.subplots() returns a PrintableFigure."""
    kwargs.setdefault("FigureClass", PrintableFigure)
    fig, ax = original_plt_subplots(*args, **kwargs)
    return fig, ax

# Apply overrides
plt.figure = custom_plt_figure
plt.subplots = custom_plt_subplots
plt.rcParams['figure.figsize'] = (8, 6)  # Optional default size
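
# Usage sketch (illustrative): figures created after these overrides expose the print helpers
#   fig, ax = plt.subplots()
#   fig.print()   # saves both PDF and PNG with an auto-generated filename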


# %% Generic Classes to manipulate results
class Cprofile:
    """
    A class representing a concentration profile C(x) for migration simulations.

    This class allows storing, interpolating, and analyzing the concentration of a
    migrating substance across a spatial domain.

    Attributes:
    -----------
    x : np.ndarray
        1D array of spatial positions.
    Cx : np.ndarray
        1D array of corresponding concentration values at `x`.

    Methods:
    --------
    interp(x_new)
        Interpolates the concentration at new spatial positions.
    integrate()
        Computes the integral of the concentration profile.
    mean_concentration()
        Computes the mean concentration over the spatial domain.
    find_indices_xrange(x_range)
        Returns indices where `x` falls within a specified range.
    find_indices_Cxrange(Cx_range)
        Returns indices where `Cx` falls within a specified concentration range.
    assign_values(indices, values)
        Assigns new concentration values at specified indices.

    Example:
    --------
    ```python
    x = np.linspace(0, 1, 10)
    Cx = np.exp(-x)
    profile = Cprofile(x, Cx)

    # Interpolating at new points
    new_x = np.linspace(0, 1, 50)
    interpolated_Cx = profile.interp(new_x)
    ```
    """

    def __init__(self, x=None, Cx=None):
        """Initialize the concentration profile Cx(x)."""
        if x is None or Cx is None:
            raise ValueError("Syntax: myprofile = Cprofile(x, Cx). Both x and Cx are mandatory.")
        self.x = np.array(x, dtype=float).reshape(-1)  # Ensure 1D NumPy array
        self.Cx = np.array(Cx, dtype=float).reshape(-1)  # Ensure 1D NumPy array
        # Check if x is strictly increasing
        if np.any(np.diff(self.x) <= 0):
            raise ValueError("x values must be strictly increasing.")
        # Create the interpolation function
        self._interp_func = interp1d(
            self.x, self.Cx, kind="linear", fill_value=0, bounds_error=False
        )

    def interp(self, x_new):
        """
        Interpolate concentration values at new x positions.

        Parameters:
            x_new (array-like): New positions where concentrations are needed.

        Returns:
            np.ndarray: Interpolated concentration values.
        """
        x_new = np.array(x_new, dtype=float)  # Ensure NumPy array
        return self._interp_func(x_new)

    def integrate(self):
        """
        Compute the integral of Cx over x using Simpson's rule.

        Returns:
            float: The integral ∫ Cx dx.
        """
        return simpson(self.Cx, x=self.x)

    def mean_concentration(self):
        """
        Compute the mean concentration using the integral.

        Returns:
            float: The mean value of Cx.
        """
        return self.integrate() / (self.x[-1] - self.x[0])

    def find_indices_xrange(self, x_range):
        """
        Find indices where x is within a specified range.

        Parameters:
            x_range (tuple): The (min, max) range of x.

        Returns:
            np.ndarray: Indices where x falls within the range.
        """
        xmin, xmax = x_range
        return np.where((self.x >= xmin) & (self.x <= xmax))[0]

    def find_indices_Cxrange(self, Cx_range=(0, np.inf)):
        """
        Find indices where Cx is within a specified range.

        Parameters:
            Cx_range (tuple): The (min, max) range of Cx.

        Returns:
            np.ndarray: Indices where Cx falls within the range.
        """
        Cmin, Cmax = Cx_range
        return np.where((self.Cx >= Cmin) & (self.Cx <= Cmax))[0]

    def assign_values(self, indices, values):
        """
        Assign new values to Cx at specified indices.

        Parameters:
            indices (array-like): Indices where values should be assigned.
            values (float or array-like): New values to assign.

        Raises:
            ValueError: If the number of values does not match the number of indices.
        """
        indices = np.array(indices, dtype=int)
        if np.isscalar(values):
            self.Cx[indices] = values  # Assign single value to all indices
        else:
            values = np.array(values, dtype=float)
            if values.shape[0] != indices.shape[0]:
                raise ValueError("Number of values must match the number of indices.")
            self.Cx[indices] = values

    def __repr__(self):
        """Representation of the profile."""
        stats_x = {
            "min": np.min(self.x),
            "max": np.max(self.x),
            "mean": np.mean(self.x),
            "median": np.median(self.x),
            "std": np.std(self.x),
        }
        stats_Cx = {
            "min": np.min(self.Cx),
            "max": np.max(self.Cx),
            "mean": np.mean(self.Cx),
            "median": np.median(self.Cx),
            "std": np.std(self.Cx),
        }

        print(
            f"Cprofile: {len(self.x)} points\n",
            f"x range: [{stats_x['min']:.4g}, {stats_x['max']:.4g}]\n",
            f"Cx range: [{stats_Cx['min']:.4g}, {stats_Cx['max']:.4g}]\n",
            f"x stats: mean={stats_x['mean']:.4g}, median={stats_x['median']:.4g}, std={stats_x['std']:.4g}\n",
            f"Cx stats: mean={stats_Cx['mean']:.4g}, median={stats_Cx['median']:.4g}, std={stats_Cx['std']:.4g}"
        )
        return str(self)

    def __str__(self):
        """Returns a formatted string representation of the profile."""
        return f"<{self.__class__.__name__}: including {len(self.x)} points>"



class SensPatankarResult:
    """
    Container for the results of the 1D mass transfer simulation performed by ``senspatankar``.

    Attributes
    ----------
    ttarget : ndarray with shape (1,)
        target simulation time
        It is a duration not an absolute time.
    CFtarget : ndarray with shape (1,)
        CF value at ttarget
    Cxtarget : ndarray with shape (npoints,)
         Cx concentration profile at t=ttarget
    t : ndarray with shape (ntimes,)
        1D array of time points (in seconds) covering from 0 to 2*ttarget
        It is a duration not an absolute time.
    C : ndarray with shape (ntimes,)
        1D array of mean concentration in the packaging (averaged over all packaging nodes)
        at each time step. Shape: (ntimes,).
    CF : ndarray with shape (ntimes,)
        1D array of concentration in the food (left boundary) at each time step. Shape: (ntimes,).
    fc : ndarray with shape (ntimes,)
        1D array of the cumulative flux into the food. Shape: (ntimes,).
    f : ndarray with shape (ntimes,)
        1D array of the instantaneous flux into the food. Shape: (ntimes,).
    x : ndarray with shape (npoints,)
        1D array of the position coordinates of all packaging nodes (including sub-nodes).
        npoints = 3 * number of original FV elements (interfaces e and w are included).
    Cx : ndarray with shape (ntimes,npoints)
        2D array of the concentration profile across the packaging thickness for each time step.
        Shape: (ntimes, 3 * number_of_nodes). Each row corresponds to one time step.
    tC : ndarray with shape (ntimes,)
        1D array of the dimensionless time points
    C0eq : ndarray with shape (1,)
        Reference (equilibrium) concentration scaling factor.
    timebase : float
        Characteristic time scale (l_ref^2 / D_ref) used to normalize the solution.
    interp_CF : scipy.interpolate._interpolate.interp1d
        1D interpolant of CF vs time
    interp_Cx : scipy.interpolate._interpolate.interp1d
        1D interpolant of Cx vs time
    restart : restartfile_senspatankar object
        Restart object (see restartfile_senspatankar doc)

    """

    def __init__(self, name, description, ttarget, t, C, CF, fc, f, x, Cx, tC, C0eq, timebase,
                 restart,restart_unsecure,xi,Cxi,
                 _plotconfig=None, createcontainer=True, container=None, discrete=False):
        """Constructor for simulation results."""
        self.name = name
        self.description = description
        self.ttarget = ttarget
        self.t = t
        self.C = C
        self.CF = CF
        self.fc = fc
        self.f = f
        self.x = x
        self.Cx = Cx
        self.tC = tC
        self.C0eq = C0eq
        self.timebase = timebase
        self.discrete = discrete  # New flag for discrete data

        # Interpolation for CF and Cx
        self.interp_CF = interp1d(t, CF, kind="linear", fill_value="extrapolate")
        self.CFtarget = self.interp_CF(ttarget)
        self.interp_Cx = interp1d(t, Cx.T, kind="linear", axis=1, fill_value="extrapolate")
        self.Cxtarget = self.interp_Cx(ttarget)

        # Restart handling
        if xi is not None and Cxi is not None:
            Cxi_interp = interp1d(t, Cxi.T, kind="linear", axis=1, fill_value="extrapolate")
            Cxi_at_t = Cxi_interp(ttarget)
            restart.freezeCF(ttarget, self.CFtarget)
            restart.freezeCx(xi, Cxi_at_t)
        self.restart = restart # secure restart file (cannot be modified from outside)
        self.restart_unsecure = restart_unsecure # unsecure one (can be modified from outside)

        # Plot configuration
        self._plotconfig = _plotconfig if _plotconfig else plotconfig

        # Store state for simulation chaining
        self.savestate(self.restart.inputs["multilayer"], self.restart.inputs["medium"])

        # Default container for results comparison
        if createcontainer:
            if container is None:
                self.comparison = CFSimulationContainer(name=name)
                currentname = "reference"
            elif isinstance(container, CFSimulationContainer):
                self.comparison = container
                currentname = name
            else:
                raise TypeError(f"container must be a CFSimulationContainer, not {type(container).__name__}")
            self.comparison.add(self, label=currentname, color="Crimson", linestyle="-", linewidth=2)

        # Distance pair
        self._distancepair = None


    def pseudoexperiment(self, npoints=25, std_relative=0.05, randomtime=False, autorecord=False, seed=None, t=None, CF=None, scale='linear'):
        """
        Generates discrete pseudo-experimental data from high-resolution simulated results.

        Parameters
        ----------
        npoints : int, optional
            Number of discrete time points to select (default: 25).
        std_relative : float, optional
            Relative standard deviation for added noise (default: 0.05).
        randomtime : bool, optional
            If True, picks random time points; otherwise, uses uniform spacing or a sqrt scale (default: False).
        autorecord : bool, optional
            If True, automatically adds the generated result to the container (default: False).
        seed : int, optional
            Random seed for reproducibility.
        t : list or np.ndarray, optional
            Specific time points to use instead of generated ones. If provided, `CF` must also be supplied.
        CF : list or np.ndarray, optional
            Specific CF values to use at the provided `t` time points. Must have the same length as `t`.
        scale : str, optional
            Determines how time points are distributed when `randomtime=False`:
            - "linear" (default): Uniformly spaced time points.
            - "sqrt": Time points are distributed more densely at the beginning using a square root scale.

        Returns
        -------
        SensPatankarResult
            A new SensPatankarResult object flagged as discrete.

        Raises
        ------
        ValueError
            If `t` and `CF` are provided but have mismatched lengths.
        """

        if seed is not None:
            np.random.seed(seed)

        if t is not None:
            t_discrete = np.array(t, dtype=float)
            if CF is None or len(CF) != len(t_discrete):
                raise ValueError("When providing t, CF values must be provided and have the same length.")
            CF_discrete_noisy = np.array(CF, dtype=float)
        else:
            if randomtime:
                t_discrete = np.sort(np.random.uniform(self.t.min(), self.t.max(), npoints))
            else:
                if scale == 'sqrt':
                    t_discrete = np.linspace(np.sqrt(self.t.min()), np.sqrt(self.t.max()), npoints) ** 2
                else:
                    t_discrete = np.linspace(self.t.min(), self.t.max(), npoints)

            CF_discrete = self.interp_CF(t_discrete)
            noise = np.random.normal(loc=0, scale=std_relative * CF_discrete)
            CF_discrete_noisy = CF_discrete + noise
            CF_discrete_noisy = np.clip(CF_discrete_noisy, a_min=0, a_max=None)

        discrete_result = SensPatankarResult(
            name=f"{self.name}_discrete",
            description=f"Discrete pseudo-experimental data from {self.name}",
            ttarget=self.ttarget,
            t=t_discrete,
            C=np.zeros_like(t_discrete),
            CF=CF_discrete_noisy,
            fc=np.zeros_like(t_discrete),
            f=np.zeros_like(t_discrete),
            x=self.x,
            Cx=np.zeros((len(t_discrete), len(self.x))),
            tC=self.tC,
            C0eq=self.C0eq,
            timebase=self.timebase,
            restart=self.restart,
            restart_unsecure=self.restart_unsecure,
            xi=None,
            Cxi=None,
            _plotconfig=self._plotconfig,
            discrete=True
        )
        if autorecord:
            self.comparison.add(discrete_result, label="pseudo-experiment", color="black", marker='o', discrete=True)
        return discrete_result
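
    # Usage sketch (illustrative): generate noisy pseudo-data from a simulation and fit it back
    #   data = sol.pseudoexperiment(npoints=20, std_relative=0.1, seed=42)
    #   sol.fit(data)   # requires Dlink/klink layerLink objects attached to the multilayer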

    @property
    def currrentdistance(self):
        """returns the square distance to the last distance pair"""
        return self.distanceSq(self._distancepair) if self._distancepair is not None else None

    def __sub__(self, other):
        """Overloads the operator - for returning a square distance function"""
        return lambda: self.distanceSq(other)

    def distanceSq(self, other, std_relative=0.05, npoints=100, cum=True):
        """
        Compute the squared distance between two SensPatankarResult instances.

        Parameters
        ----------
        other : SensPatankarResult
            The other instance to compare against.
        std_relative : float, optional
            Relative standard deviation for normalization (default: 0.05).
        npoints : int, optional
            Number of points for interpolation if both are continuous (default: 100).
        cum : bool, optional
            If True, return the cumulative sum; otherwise, return pointwise values.

        Returns
        -------
        float or np.ndarray
            The squared normalized error.

        Raises
        ------
        TypeError
            If `other` is not an instance of SensPatankarResult.
        ValueError
            If the time ranges do not overlap or if discrete instances have different time points.
        """
        if not isinstance(other, SensPatankarResult):
            raise TypeError(f"other must be a SensPatankarResult not a {type(other).__name__}")

        # refresh
        self._distancepair = other # used for distance evaluation via self.currrentdistance
        # Find common time range
        tmin, tmax = max(self.t.min(), other.t.min()), min(self.t.max(), other.t.max())
        if tmin >= tmax:
            raise ValueError("No overlapping time range between instances.")
        if not self.discrete and not other.discrete:
            # Case 1: Both are continuous
            t_common = np.linspace(tmin, tmax, npoints)
            CF_self = self.interp_CF(t_common)
            CF_other = other.interp_CF(t_common)
        elif self.discrete and not other.discrete:
            # Case 2: self is discrete, other is continuous
            t_common = self.t
            CF_self = self.CF
            CF_other = other.interp_CF(self.t)
        elif not self.discrete and other.discrete:
            # Case 3: self is continuous, other is discrete
            t_common = other.t
            CF_self = self.interp_CF(other.t)
            CF_other = other.CF
        else:
            # Case 4: Both are discrete
            if not np.array_equal(self.t, other.t):
                raise ValueError("Discrete instances must have the same time points.")
            t_common = self.t
            CF_self = self.CF
            CF_other = other.CF
        # Compute squared normalized error
        m = (CF_self + CF_other) / 2
        m[m == 0] = 1  # Avoid division by zero, results in zero error where both are zero
        e2 = ((CF_self - CF_other) / (m * std_relative)) ** 2
        return np.sum(e2) if cum else e2
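
    # Usage sketch (illustrative): sol.distanceSq(data) and (sol - data)() return the same value
    #   err2 = sol.distanceSq(data)        # cumulative squared normalized error
    #   d2 = sol - data; err2 = d2()       # same value via the overloaded "-" operator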

    def fit(self,other,disp=True,std_relative=0.05,maxiter=100,xatol=1e-3,fatol=1e-3):
        """Fits simulation parameters D and k to fit a discrete CF data"""
        if not isinstance(other,SensPatankarResult):
            raise TypeError(f"other must be a SensPatankarResult not a {type(other).__name__}")
        if self.discrete:
            raise ValueError("the current instance contains discrete data, use it as other")
        if not other.discrete:
            raise ValueError("only discrete CF results can be fitted")
        # retrieve current Dlink and klink
        Dlink = self.restart_unsecure.inputs["multilayer"].Dlink
        klink = self.restart_unsecure.inputs["multilayer"].klink
        if Dlink is None and klink is None:
            raise ValueError("provide at least a Dlink or klink object")
        if Dlink is not None and not isinstance(Dlink,layerLink):
            raise TypeError(f"Dlink must be a layerLink not a {type(Dlink).__name__}")
        if klink is not None and not isinstance(klink,layerLink):
            raise TypeError(f"klink must be a layerLink not a {type(klink).__name__}")
        # options for the optimizer
        optimOptions = {"disp": disp, "maxiter": maxiter, "xatol": xatol, "fatol": fatol}
        # params is assembled by concatenating -log(Dlink.values) and log(klink.values)
        params_initial = np.concatenate((-np.log(Dlink.values),np.log(klink.values)))
        maskD = np.concatenate((np.ones(Dlink.nzlength, dtype=bool), np.zeros(klink.nzlength, dtype=bool)))
        maskk = np.concatenate((np.zeros(Dlink.nzlength, dtype=bool), np.ones(klink.nzlength, dtype=bool)))
        # distance criterion
        d2 = lambda: self.distanceSq(other, std_relative=std_relative) # equivalent to (self - other)() when std_relative=0.05
        def objective(params):
            """objective function, all parameters are passed via layerLink"""
            logD = params[maskD]
            logk = params[maskk]
            Dlink.values = np.exp(-logD)
            klink.values = np.exp(logk)
            self.rerun(name="optimizer",color="OrangeRed",linewidth=4)
            return d2()
        def callback(params):
            """Called at each iteration to display current values."""
            Dtmp, ktmp = np.exp(-params[maskD]), np.exp(params[maskk])
            print("Fitting Iteration:\n",f"D={Dtmp} [m²/s]\n",f"k={ktmp} [a.u.]\n")
        # do the optimization
        result = minimize(objective,
                          params_initial,
                          method='Nelder-Mead',
                          callback=callback,
                          options=optimOptions)
        # extract the solution, be sure it is updated
        Dlink.values, klink.values = np.exp(-result.x[maskD]), np.exp(result.x[maskk])
        return result


    def savestate(self,multilayer,medium):
        """Saves senspantankar inputs for simulation chaining"""
        self._lastmedium = medium
        self._lastmultilayer = multilayer
        self._isstatesaved = True

    def update(self, **kwargs):
        """
        Update modifiable parameters of the SensPatankarResult object.
        Parameters:
            - name (str): New name for the object.
            - description (str): New description.
            - tscale (float or tuple): Time scale (can be tuple like (1, "day")).
            - tunit (str): Time unit.
            - lscale (float or tuple): Length scale (can be tuple like (1e-6, "µm")).
            - lunit (str): Length unit.
            - Cscale (float or tuple): Concentration scale (can be tuple like (1, "a.u.")).
            - Cunit (str): Concentration unit.
        """
        def checkunits(value):
            """Helper function to handle unit conversion for scale/unit tuples."""
            if isinstance(value, tuple) and len(value) == 2:
                scale, unit = check_units(value)
                scale, unit = np.array(scale, dtype=float), str(unit)  # Ensure correct types
                return scale.item(), unit  # Convert numpy array to float
            elif isinstance(value, (int, float, np.ndarray)):
                value = np.array(value, dtype=float)  # Ensure float
                return value.item(), None  # Return as float with no unit change
            else:
                raise ValueError(f"Invalid value for scale/unit: {value}")

        # Update `name` and `description` if provided
        if "name" in kwargs:
            self.name = str(kwargs["name"])
        if "description" in kwargs:
            self.description = str(kwargs["description"])
        # Update `_plotconfig` parameters
        for key in ["tscale", "tunit", "lscale", "lunit", "Cscale", "Cunit"]:
            if key in kwargs:
                value = kwargs[key]

                if key in ["tscale", "lscale", "Cscale"]:
                    value, unit = checkunits(value)  # Process unit conversion
                    self._plotconfig[key] = value
                    if unit is not None:
                        self._plotconfig[key.replace("scale", "unit")] = unit  # Ensure unit consistency
                else:
                    self._plotconfig[key] = str(value)  # Convert unit strings directly
        return self  # Return self for method chaining if needed
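
    # Usage sketch (illustrative): update() changes display labels/units without touching the numeric data
    #   sol.update(name="new label", tscale=(1, "day"), lscale=(1e-6, "µm"))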


    def rerun(self,name=None,color=None,linestyle=None,linewidth=None, container=None, **kwargs):
        """
        Rerun the simulation (while keeping everything else unchanged).
            This method is intended to be used with layerLink objects so that parameters can be updated internally.
            R.rerun() stores the updated simulation results in R
            Rupdate = R.rerun() returns a copy of R while updating R

        note: use R.resume() to resume/continue a simulation rather than rerun it; rerun() is meant for sensitivity analysis/fitting.
        """
        F = self._lastmedium
        P = self._lastmultilayer
        if not isinstance(F, foodphysics):
            raise TypeError(f"the current object is corrupted, _lastmedium is {type(self._lastmedium).__name__}")
        if not isinstance(P, layer):
            raise TypeError(f"the current object is corrupted, _lastmultilayer is {type(self._lastmultilayer).__name__}")
        container = self.comparison if container is None else container
        if not isinstance(container,CFSimulationContainer):
            raise TypeError(f"the container should be a CFSimulationContainer not a {type(CFSimulationContainer).__name__}")
        # rerun the simulation using unsecure restart data
        inputs = self.restart_unsecure.inputs # all previous inputs
        R = senspatankar(multilayer=inputs["multilayer"],
                              medium=inputs["medium"],
                              name=name if name is not None else inputs["name"],
                              description=kwargs.get("description",inputs["description"]),
                              t=kwargs.get("t",inputs["t"]),
                              autotime=kwargs.get("autotime",inputs["autotime"]),
                              timescale=kwargs.get("timescale",inputs["timescale"]),
                              Cxprevious=inputs["Cxprevious"],
                              ntimes=kwargs.get("ntimes",inputs["ntimes"]),
                              RelTol=kwargs.get("RelTol",inputs["RelTol"]),
                              AbsTol=kwargs.get("AbsTol",inputs["AbsTol"]),
                              container=container)
        # Update numeric data in self with those in R
        self.t = R.t
        self.C = R.C
        self.CF = R.CF
        self.fc = R.fc
        self.f = R.f
        self.x = R.x
        self.Cx = R.Cx
        self.tC = R.tC
        self.C0eq = R.C0eq
        self.timebase = R.timebase
        self.discrete = R.discrete
        self.interp_CF = R.interp_CF
        self.CFtarget = R.CFtarget
        self.interp_Cx = R.interp_Cx
        self.Cxtarget = R.Cxtarget
        # Update label, color, linestyle, linewidth for the new curve (-1: last in the container)
        # note if name already exists, the previous content is replaced
        self.comparison.update(-1, label=name, color=color, linestyle=linestyle, linewidth=linewidth)
        return self # for chaining


    def resume(self,t=None,**kwargs):
        """
        Resume the simulation for a new duration (with all parameters unchanged)

        For convenience user overrides are provided as:
            parameter = value
            with parameter = "name","description"..."RelTol","AbsTol" (see senspatankar)
        Use specifically:
            CF0 to assign a different concentration for the food
            Cx0 (Cprofile object) to assign a different concentration profile (not recommended)
            medium to set a different medium (food) in contact
        """

        # retrieve previous results
        previousCF = self.restart.CF # CF at t=ttarget
        previousCx = self.restart.Cprofile # corresponding profile
        previousmedium = self.restart.inputs["medium"].copy()
        previousmedium.CF0 = previousCF # we apply the concentration
        # CF override with CF=new value
        isCF0forced = "CF0" in kwargs
        newmedium = kwargs.get("medium",previousmedium)
        if isCF0forced:
            newCF0 = kwargs.get("CF0",previousCF)
            newmedium.CF0 = newCF0
        if t is None:
            ttarget = newmedium.get_param("contacttime",(10,"days"),acceptNone=False)
            t = 2*ttarget
        # Concentration profile override with Cx0=new profile
        newCx0 = kwargs.get("Cx0",previousCx)
        if not isinstance(newCx0,Cprofile):
            raise TypeError(f"Cx0 should be a Cprofile object not a {type(newCx0).__name__}")

        # extend the existing solution
        inputs = self.restart.inputs # all previous inputs
        newsol = senspatankar(multilayer=inputs["multilayer"],
                              medium=newmedium,
                              name=kwargs.get("name",inputs["name"]),
                              description=kwargs.get("description",inputs["description"]),
                              t=t,
                              autotime=kwargs.get("autotime",inputs["autotime"]),
                              timescale=kwargs.get("timescale",inputs["timescale"]),
                              Cxprevious=newCx0,
                              ntimes=kwargs.get("ntimes",inputs["ntimes"]),
                              RelTol=kwargs.get("RelTol",inputs["RelTol"]),
                              AbsTol=kwargs.get("AbsTol",inputs["AbsTol"]))
        return newsol
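
    # Usage sketch (illustrative): continue the same simulation with the food concentration reset
    #   sol2 = sol.resume(CF0=0, name="second step")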


    def copy(self):
        """
        Creates a deep copy of the current SensPatankarResult instance.

        Returns
        -------
        SensPatankarResult
            A new instance with identical attributes as the original.
        """
        return SensPatankarResult(
            name=self.name,
            description=self.description,
            ttarget=self.ttarget,
            t=self.t.copy(),
            C=self.C.copy(),
            CF=self.CF.copy(),
            fc=self.fc.copy(),
            f=self.f.copy(),
            x=self.x.copy(),
            Cx=self.Cx.copy(),
            tC=self.tC.copy(),
            C0eq=self.C0eq,
            timebase=self.timebase,
            restart=self.restart,
            restart_unsecure=self.restart_unsecure,
            xi=None,
            Cxi=None,
            _plotconfig=self._plotconfig,
            discrete=self.discrete
        )

    def chaining(self,multilayer,medium,**kwargs):
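        """Chain a new simulation step: resume with the given medium, store the results/inputs in it, and save the state for further chaining (used by the >> operator)."""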
        sim = self.resume(multilayer=multilayer,medium=medium,**kwargs)
        medium.lastsimulation = sim # store the last simulation result in medium
        medium.lastinput = multilayer # store the last input (in medium)
        sim.savestate(multilayer,medium) # store the inputs in sim for chaining
        return sim

    # overloading operation
    def __rshift__(self, medium):
        """Overloads >> to propagate migration to food."""
        if not isinstance(medium,foodphysics):
            raise TypeError(f"medium must be a foodphysics object not a {type(medium).__name__}")
        if not self._isstatesaved:
            raise RuntimeError("The previous inputs were not saved within the instance.")
        # we update the contact temperature (see example3)
        return self.chaining(medium>>self._lastmultilayer,medium,CF0=self.restart.CF)

    def __add__(self, other):
        """Concatenate two solutions"""
        if not isinstance(other, SensPatankarResult):
            raise TypeError("Can only add two SensPatankarResult objects")

        # Ensure compatibility of x-axis
        if not np.isclose(self.x[0], other.x[0]) or not np.isclose(self.x[-1], other.x[-1]):
            raise ValueError("Mismatch in x-axis boundaries between solutions")

        # Interpolate other.Cx onto self.x
        interp_Cx_other = interp1d(other.x, other.Cx.T, kind="linear", fill_value=0, axis=0)
        Cx_other_interp = interp_Cx_other(self.x).T  # Ensuring shape (ntimes, npoints)

        # Restrict times for valid merging
        valid_indices_self = self.t <= self.ttarget
        valid_indices_other = (other.t > 0) #& (other.t <= other.ttarget)
        t_self = self.t[valid_indices_self]
        t_other = other.t[valid_indices_other] + self.ttarget  # Shift time

        # Merge time arrays without duplicates
        t_merged = np.unique(np.concatenate((t_self, t_other)))
        tC_merged = np.unique(np.concatenate((self.tC[valid_indices_self], other.tC[valid_indices_other])))

        # Merge concentration-related attributes
        C_merged = np.concatenate((self.C[valid_indices_self], other.C[valid_indices_other]))
        CF_merged = np.concatenate((self.CF[valid_indices_self], other.CF[valid_indices_other]))
        fc_merged = np.concatenate((self.fc[valid_indices_self], other.fc[valid_indices_other]))
        f_merged = np.concatenate((self.f[valid_indices_self], other.f[valid_indices_other]))

        # Merge concentration profiles
        Cx_merged = np.vstack((self.Cx[valid_indices_self], Cx_other_interp[valid_indices_other]))

        # Merged description
        if self.description and other.description:
            merged_description = f"Merged: {self.description} & {other.description}"
        elif self.description:
            merged_description = self.description
        elif other.description:
            merged_description = other.description
        else:
            merged_description = ""

        # Create new instance with merged data
        merged_result = SensPatankarResult(
            name=f"{self.name} + {other.name}" if self.name!=other.name else self.name,
            description=merged_description,
            ttarget=self.ttarget + other.ttarget,
            t=t_merged,
            C=C_merged,
            CF=CF_merged,
            fc=fc_merged,
            f=f_merged,
            x=self.x,  # Keep self.x as reference
            Cx=Cx_merged,
            tC=tC_merged,
            C0eq=self.C0eq,  # Keep self.C0eq
            timebase=other.timebase,  # Take timebase from other
            restart=other.restart,  # Take restart from other (the last valid one)
            restart_unsecure=other.restart_unsecure,  # Take restart from other (the last valid one)
            xi=None,  # xi and Cxi values are available
            Cxi=None  # only from a fresh simulation
        )

        return merged_result

    def interpolate_CF(self, t, kind="linear", fill_value="extrapolate"):
        """
        Interpolates the concentration in the food (CF) at given time(s).

        Parameters
        ----------
        t : float, list, tuple, or ndarray
            Time(s) at which to interpolate CF values.
            - If a tuple, it should be (value or list, unit) and will be converted to SI.
            - If a scalar or list, it is assumed to be in SI units already.
        kind : str, optional
            Interpolation method. Default is "linear".
            Possible values:
            - "linear": Piecewise linear interpolation (default).
            - "nearest": Nearest-neighbor interpolation.
            - "zero": Zero-order spline interpolation.
            - "slinear", "quadratic", "cubic": Spline interpolations of various orders.
        fill_value : str or float, optional
            Specifies how to handle values outside the given range.
            - "extrapolate" (default): Extrapolates values beyond available data.
            - Any float: Uses a constant value for out-of-bounds interpolation.

        Returns
        -------
        ndarray
            Interpolated CF values at the requested time(s).
        """
        # Convert time input to SI units if provided as a tuple
        if isinstance(t, tuple):
            t, _ = check_units(t)  # Convert to numeric array

        # Ensure t is a NumPy array for vectorized operations
        t = np.atleast_1d(t)

        # Create the interpolant on demand with user-defined settings
        interp_function = interp1d(self.t, self.CF, kind=kind, fill_value=fill_value, bounds_error=False)

        # Return interpolated values
        return interp_function(t)
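
    # Usage sketch (illustrative): tuples (value, unit) are converted to SI via check_units()
    #   CF_at_10days = sol.interpolate_CF((10, "days"))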


    def __repr__(self):
        ntimes = len(self.t)
        nx = self.Cx.shape[1] if self.Cx.ndim > 1 else len(self.x)
        tmin, tmax = self.t.min(), self.t.max()
        xmin, xmax = self.x.min(), self.x.max()

        print(f"SensPatankarResult: {self.name}\n"
              f"\t {self.description if self.description != '' else '<no description>'}\n"
              f"\t - with {ntimes} time steps\n",
              f"\t - with {nx} spatial points\n"
              f"\t - Time range: [{tmin:.2e}, {tmax:.2e}] s\n"
              f"\t - Position range: [{xmin:.2e}, {xmax:.2e}] m")

        return str(self)


    def __str__(self):
        return (f'<{self.__class__.__name__}:{self.name}: '
            f'CF({(self.ttarget / plotconfig["tscale"]).item():.4g} [{plotconfig["tunit"]}]) = '
            f'{(self.CFtarget / plotconfig["Cscale"]).item():.4g} [{plotconfig["Cunit"]}]>')



    def plotCF(self, t=None, trange=None):
        """
        Plot the concentration in the food (CF) as a function of time.

        - If `self.discrete` is True, plots discrete points.
        - If `self.discrete` is False, plots a continuous curve.
        - Highlights the target time(s).

        Parameters
        ----------
        t : float, list, or None, optional
            Specific time(s) for which the concentration should be highlighted.
            If None, defaults to `ttarget`.
        trange : None, float, or list [t_min, t_max], optional
            If None, the full profile is shown.
            If a float, it is treated as an upper bound (lower bound assumed 0).
            If a list `[t_min, t_max]`, the profile is limited to that range.
        """
        plt.rc('text', usetex=False) # use Matplotlib's built-in mathtext (LaTeX rendering disabled)
        # Extract plot configuration
        plotconfig = self._plotconfig
        # Ensure t is a list (even if a single value is given)
        if t is None:
            t_values = [self.ttarget]
        elif isinstance(t, (int, float)):
            t_values = [t]
        elif isinstance(t, np.ndarray):
            t_values = t.flatten()
        elif isinstance(t, tuple):
            t_values = check_units(t)[0]
        else:
            t_values = np.array(t)  # Convert to array
        # Interpolate CF values at given times
        CF_t_values = self.interp_CF(t_values)
        # Handle trange selection
        if trange is None:
            t_plot = self.t
            CF_plot = self.CF
        else:
            # Convert trange to a valid range
            if isinstance(trange, (int, float)):
                trange = [0, trange]  # Assume lower bound is 0
            elif len(trange) != 2:
                raise ValueError("trange must be None, a single float (upper bound), or a list of two values [t_min, t_max]")
            # Validate range
            t_min, t_max = trange
            if t_min < self.t.min() or t_max > self.t.max():
                print("Warning: trange values are outside the available time range and may cause extrapolation.")
            # Generate time values within range
            mask = (self.t >= t_min) & (self.t <= t_max)
            t_plot = self.t[mask]
            CF_plot = self.CF[mask]
        # Set up colormap for multiple target values
        cmap = plt.get_cmap('viridis', len(t_values))
        norm = mcolors.Normalize(vmin=min(t_values), vmax=max(t_values))
        # Create the figure
        fig, ax = plt.subplots(figsize=(8, 6))
        # Plot behavior depends on whether data is discrete
        if self.discrete:
            ax.scatter(t_plot / plotconfig["tscale"], CF_plot / plotconfig["Cscale"],
                       color='b', label='Concentration in Food (Discrete)', marker='o', alpha=0.7)
        else:
            ax.plot(t_plot / plotconfig["tscale"], CF_plot / plotconfig["Cscale"],
                    label='Concentration in Food', color='b')
        # Highlight each target time
        for i, tC in enumerate(t_values):
            color = tooclear(cmap(norm(tC))) if len(t_values) > 1 else 'r'  # Use colormap only if multiple t values

            # Vertical and horizontal lines
            ax.axvline(tC / plotconfig["tscale"], color=color, linestyle='--', linewidth=1)
            ax.axhline(CF_t_values[i] / plotconfig["Cscale"], color=color, linestyle='--', linewidth=1)
            # Highlight points
            ax.scatter(tC / plotconfig["tscale"], CF_t_values[i] / plotconfig["Cscale"],
                       color=color, edgecolor='black', zorder=3, marker='D')
            # Annotate time
            ax.text(tC / plotconfig["tscale"], min(CF_plot) / plotconfig["Cscale"],
                    f'{(tC / plotconfig["tscale"]).item():.2f} {plotconfig["tunit"]}',
                    verticalalignment='bottom', horizontalalignment='right', rotation=90, fontsize=10, color=color)
            # Annotate concentration
            ax.text(min(t_plot) / plotconfig["tscale"], CF_t_values[i] / plotconfig["Cscale"],
                    f'{(CF_t_values[i] / plotconfig["Cscale"]).item():.2f} {plotconfig["Cunit"]}',
                    verticalalignment='bottom', horizontalalignment='left', fontsize=10, color=color)
        # Labels and title
        ax.set_xlabel(f'Time [{plotconfig["tunit"]}]')
        ax.set_ylabel(f'Concentration in Food [{plotconfig["Cunit"]}]')
        title_main = "Concentration in Food vs. Time"
        if self.discrete:
            title_main += " (Discrete Data)"
        title_sub = rf"$\bf{{{self.name}}}$" + (f": {self.description}" if self.description else "")
        ax.set_title(f"{title_main}\n{title_sub}", fontsize=10)
        #ax.text(0.5, 1.05, title_sub, fontsize=8, ha="center", va="bottom", transform=ax.transAxes)
        ax.legend()
        ax.grid(True)
        plt.show()
        # Store metadata
        setattr(fig, _fig_metadata_atrr_, f"pltCF_{self.name}")
        return fig
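
A short usage sketch for `plotCF` (same assumed `sol` as above; times can be given in seconds or as (value, unit) tuples):

```python
import numpy as np

fig1 = sol.plotCF()                                           # highlight the default target time
fig2 = sol.plotCF(t=np.array([3, 10, 14]) * 24 * 3600)        # highlight several times given in seconds
fig3 = sol.plotCF(t=(7, "days"), trange=(0, 20 * 24 * 3600))  # restrict the plotted time window
```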



    def plotCx(self, t=None, nmax=15):
        """
        Plot the concentration profiles (Cx) in the packaging vs. position (x) for different times,
        using a color gradient similar to Parula, based on time values (not index order).
        Additionally, highlight the concentration profile at `ttarget` with a thick black line.

        Parameters
        ----------
        t : list, array-like, or None, optional
            List of specific times to plot. Only valid values (inside self.t) are used.
            If None, time values are selected using sqrt-spaced distribution.
        nmax : int, optional
            Maximum number of profiles to plot. The default is 15.
        """
        plt.rc('text', usetex=False) # Use Matplotlib's built-in mathtext (LaTeX rendering disabled)
        # short circuit
        if self.discrete:
            print("discrete SensPatankarResult instance does not contain profile data, nothing to plot.")
            return None
        # extract plotconfig
        plotconfig = self._plotconfig
        # Ensure time values are within the available time range
        if t is None:
            # Default: Select `nmax` time values using sqrt-spacing
            nt = len(self.t)
            if nt <= nmax:
                t_values = self.t
            else:
                sqrt_t = np.sqrt(self.t)
                sqrt_t_values = np.linspace(sqrt_t[0], sqrt_t[-1], nmax)
                t_values = sqrt_t_values**2
        else:
            # Use user-specified time values
            if isinstance(t,tuple):
                t_values = check_units(t)[0]
            else:
                t_values = np.array(t)
            # Keep only valid times inside `self.t`
            t_values = t_values[(t_values >= self.t.min()) & (t_values <= self.t.max())]
            if len(t_values) == 0:
                print("Warning: No valid time values found in the specified range.")
                return
            # If more than `nmax`, keep the first `nmax` values
            t_values = t_values[:nmax]
        # Normalize time for colormap (Ensure at least one valid value)
        norm = mcolors.Normalize(vmin=t_values.min()/plotconfig["tscale"],
                                 vmax=t_values.max()/plotconfig["tscale"]) if len(t_values) > 1 \
            else mcolors.Normalize(vmin=self.t.min()/plotconfig["tscale"],
                                   vmax=self.t.max()/plotconfig["tscale"])
        cmap = plt.get_cmap('viridis', nmax)  # 'viridis' is similar to Parula
        # new figure
        fig, ax = plt.subplots(figsize=(8, 6))  # Explicitly create a figure and axis
        # Plot all valid concentration profiles with time-based colormap
        for tC in t_values:
            C = self.interp_Cx(tC)
            color = tooclear(cmap(norm(tC/plotconfig["tscale"])))  # Get color from colormap
            ax.plot(self.x / plotconfig["lscale"], C / plotconfig["Cscale"],
                    color=color, alpha=0.9, label=f't={tC / plotconfig["tscale"]:.3g} {plotconfig["tunit"]}')
        # Highlight concentration profile at `ttarget`
        ax.plot(self.x / plotconfig["lscale"], self.Cxtarget / plotconfig["Cscale"], 'k-', linewidth=3,
                label=f't={self.ttarget[0] / plotconfig["tscale"]:.2g} {plotconfig["tunit"]} (target)')
        # Create ScalarMappable and add colorbar
        sm = cm.ScalarMappable(cmap=cmap, norm=norm)
        sm.set_array([])  # Needed for colorbar
        cbar = fig.colorbar(sm, ax=ax)  # Explicitly associate colorbar with axis
        cbar.set_label(f'Time [{plotconfig["tunit"]}]')
        ax.set_xlabel(f'Position [{plotconfig["lunit"]}]')
        ax.set_ylabel(f'Concentration in Packaging [{plotconfig["Cunit"]}]')
        title_main = "Concentration Profiles in Packaging vs. Position"
        title_sub = rf"$\bf{{{self.name}}}$" + (f": {self.description}" if self.description else "")
        ax.set_title(f"{title_main}\n{title_sub}", fontsize=10)
        ax.grid(True)
        ax.legend()
        plt.show()
        # store metadata
        setattr(fig,_fig_metadata_atrr_,f"pltCx_{self.name}")
        return fig
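
Usage sketch for `plotCx` (same assumed `sol`; requested times must lie inside the simulated range, and at most `nmax` profiles are drawn):

```python
fig = sol.plotCx()                                     # up to 15 sqrt-spaced profiles + target profile in black
fig = sol.plotCx(t=([1, 5, 10, 20], "days"), nmax=10)  # user-selected profile times as a (values, unit) tuple
```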


# Container for multiple simulations
class CFSimulationContainer:
    """
    Container to store and compare multiple CF results from different simulations.

    Attributes
    ----------
    curves : dict
        Stores CF results with unique keys. Each entry contains:
        - 'label': Label used for legend.
        - 'tmin', 'tmax': Time range of the simulation.
        - 'interpolant': Interpolated CF function (if continuous).
        - 'times': Discrete time points (if discrete).
        - 'values': Discrete CF values (if discrete).
        - 'color': Assigned color for plotting.
        - 'linestyle': Line style (default is '-').
        - 'linewidth': Line width (default is 2).
        - 'marker': Marker style for discrete data.
        - 'markerfacecolor': Marker face color.
        - 'markersize': Marker size.
        - 'discrete': Boolean indicating discrete data.
    """

    def __init__(self,name="",description=""):
        """Initialize an empty container for CF results."""
        self.curves = {}
        self._name = name
        self._description = description
        self._plotconfig = plotconfig

    @property
    def name(self):
        return self._name or autoname(6)

    @property
    def description(self):
        return self._description or f"comparison of {len(self.curves)} curves"


    def add(self, simulation_result, label=None, color=None, linestyle="-", linewidth=2,
            marker='o', markerfacecolor='auto', markeredgecolor='black', markersize=6, discrete=False):
        """
        Add a CF result to the container.

        Parameters
        ----------
        simulation_result : SensPatankarResult
            The simulation result to store.
        label : str, optional
            Legend label (default "plot<n>"); truncated to 80 characters, it also serves as the storage key.
        color, linestyle, linewidth, marker, markerfacecolor, markeredgecolor, markersize : optional
            Plot styling. If `color` is None, a color is drawn from the "tab10" colormap;
            `markerfacecolor='auto'` reuses the curve color.
        discrete : bool, optional
            Whether the data is stored as discrete points (times/values) instead of an interpolant.
        """
        if not isinstance(simulation_result, SensPatankarResult):
            raise TypeError(f"Expected SensPatankarResult, got {type(simulation_result).__name__}")
        label = label or f"plot{len(self.curves) + 1}"
        key = label[:80]
        if color is None:
            cmap = cm.get_cmap("tab10", len(self.curves) + 1)
            color = cmap(len(self.curves) % 10)
        if markerfacecolor == 'auto':
            markerfacecolor = color
        self.curves[key] = {
            "label": label,
            "color": color,
            "linestyle": linestyle,
            "linewidth": linewidth,
            "marker": marker,
            "markerfacecolor": markerfacecolor,
            "markeredgecolor": markeredgecolor,
            "markersize": markersize,
            "discrete": discrete
        }
        if discrete:
            self.curves[key].update({
                "times": simulation_result.t,
                "values": simulation_result.CF
            })
        else:
            self.curves[key].update({
                "tmin": simulation_result.t.min(),
                "tmax": simulation_result.t.max(),
                "interpolant": simulation_result.interp_CF
            })
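
A minimal workflow sketch for the container (assuming two `SensPatankarResult` objects `sol1` and `sol2`, e.g. obtained as in the `__main__` example at the end of this module; the names and labels are illustrative):

```python
cmp = CFSimulationContainer(name="demo", description="effect of the contact layer")
cmp.add(sol1, label="case 1 (reference)", color="b")
cmp.add(sol2, label="case 2 (variant)", color="r", linestyle="--")
cmp.plotCF()                          # overlay of both CF(t) kinetics
df = cmp.to_dataframe(num_points=500)
```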

    def delete(self, identifier):
        """
        Remove a stored curve by its index (int) or label (str).

        Parameters
        ----------
        identifier : int or str
            - If `int`, removes the curve at the specified index.
            - If `str`, removes the curve with the matching label.
        """
        if isinstance(identifier, int):
            key = self._get_keys_by_indices(identifier)[0]
        elif isinstance(identifier, str):
            key = identifier[:80]  # Match the label-based key (same truncation as in add())
            if key not in self.curves:
                print(f"No curve found with label '{identifier}'")
                return
        else:
            raise TypeError("Identifier must be an integer (index) or a string (label).")
        del self.curves[key]  # remove the stored curve

    def __repr__(self):
        """Return a summary of stored CF curves including index numbers."""
        if not self.curves:
            return "<CFSimulationContainer: No stored curves>"
        repr_str = "<CFSimulationContainer: Stored CF Curves>\n"
        repr_str += "--------------------------------------------------\n"
        for index, (key, data) in enumerate(self.curves.items()):
            if data["discrete"]:
                tmin, tmax = data["times"].min(), data["times"].max()
            else:
                tmin, tmax = data["tmin"], data["tmax"]
            repr_str += (f"[{index}] Label: {data['label']} | "
                         f"Time: [{tmin:.2e}, {tmax:.2e}] s | "
                         f"Color: {data['color']} | "
                         f"Style: {data['linestyle']} | "
                         f"Width: {data['linewidth']}\n")
        return repr_str

    def _validate_indices(self, indices):
        """Helper function to check if indices are valid."""
        if isinstance(indices, int):
            indices = [indices]
        if not all(isinstance(i, int) and 0 <= i < len(self.curves) for i in indices):
            raise IndexError(f"Invalid index in {indices}. Must be between 0 and {len(self.curves) - 1}.")
        return indices

    def _get_keys_by_indices(self, indices):
        """Helper function to retrieve keys based on indices."""
        if isinstance(indices, (int, str)):
            indices = [indices]
        keys = []
        all_keys = list(self.curves.keys())
        for idx in indices:
            if isinstance(idx, int):
                if idx < 0:
                    idx += len(all_keys)
                if idx < 0 or idx >= len(all_keys):
                    raise IndexError(f"Index {idx} is out of range for curves.")
                keys.append(all_keys[idx])
            elif isinstance(idx, str):
                if idx not in self.curves:
                    raise KeyError(f"Key '{idx}' does not exist in curves.")
                keys.append(idx)
            else:
                raise TypeError("Index must be an int, str, or a list of both.")
        return keys

    def update(self, index, label=None, linestyle=None, linewidth=None, color=None,
               marker=None, markersize=None, markerfacecolor=None, markeredgecolor=None):
        """
        Update properties of one or multiple curves.

        Parameters
        ----------
        index : int or list of int
            Index or indices of the curve(s) to update.
        label : str, optional
            New label for the curve(s).
        linestyle : str, optional
            New linestyle for the curve(s).
        linewidth : float, optional
            New linewidth for the curve(s).
        color : str or tuple, optional
            New color for the curve(s).
        marker : str, optional
            New marker style for discrete data.
        markersize : float, optional
            New marker size for discrete data.
        markerfacecolor : str or tuple, optional
            New marker face color.
        markeredgecolor : str or tuple, optional
            New marker edge color.
        """
        keys = self._get_keys_by_indices(index)

        for key in keys:
            if label is not None:
                self.curves[key]["label"] = label
            if linestyle is not None:
                self.curves[key]["linestyle"] = linestyle
            if linewidth is not None:
                self.curves[key]["linewidth"] = linewidth
            if color is not None:
                self.curves[key]["color"] = color
            if marker is not None:
                self.curves[key]["marker"] = marker
            if markersize is not None:
                self.curves[key]["markersize"] = markersize
            if markerfacecolor is not None:
                self.curves[key]["markerfacecolor"] = markerfacecolor
            if markeredgecolor is not None:
                self.curves[key]["markeredgecolor"] = markeredgecolor


    def label(self, index, new_label):
        """Change the label of one or multiple curves."""
        self.update(index, label=new_label)

    def linewidth(self, index, new_value):
        """Change the linewidth of one or multiple curves."""
        self.update(index, linewidth=new_value)

    def linestyle(self, index, new_style):
        """Change the linestyle of one or multiple curves."""
        self.update(index, linestyle=new_style)

    def color(self, index, new_color):
        """Change the color of one or multiple curves."""
        self.update(index, color=new_color)

    def marker(self, index, new_marker):
        """Change the marker style of one or multiple curves."""
        self.update(index, marker=new_marker)

    def markersize(self, index, new_size):
        """Change the marker size of one or multiple curves."""
        self.update(index, markersize=new_size)

    def markerfacecolor(self, index, new_facecolor):
        """Change the marker face color of one or multiple curves."""
        self.update(index, markerfacecolor=new_facecolor)

    def markeredgecolor(self, index, new_edgecolor):
        """Change the marker edge color of one or multiple curves."""
        self.update(index, markeredgecolor=new_edgecolor)

    def colormap(self, name="viridis", ncolors=16, tooclearflag=True, reverse=False):
        """
        Generates a list of `ncolors` colors from the specified colormap.

        Parameters:
        -----------
        name : str, optional (default="viridis")
            Name of the Matplotlib colormap to use.
        ncolors : int, optional (default=16)
            Number of colors to generate.
        tooclearflag : bool, optional (default=True)
            If True, applies `tooclear` function to adjust brightness.
        reverse : bool, optional (default=False)
            If True, reverses the colormap.

        Returns:
        --------
        list of tuples
            List of RGB(A) colors in [0,1] range.

        Raises:
        -------
        ValueError
            If the colormap name is not recognized.
        """
        return colormap(name, ncolors, tooclearflag, reverse)

    def viridis(self, ncolors=16, tooclear=True, reverse=False):
        """Generates colors from the Viridis colormap."""
        return colormap("viridis", ncolors, tooclear, reverse)

    def jet(self, ncolors=16, tooclear=True, reverse=False):
        """Generates colors from the Jet colormap."""
        return colormap("jet", ncolors, tooclear, reverse)


    def plotCF(self, t_range=None):
        """
        Plot all stored CF curves in a single figure.

        Parameters
        ----------
        t_range : tuple (t_min, t_max), optional
            Time range for plotting. If None, uses each curve's own range.
        Notes
        -----
        Unit labels and scaling factors ("tunit", "Cunit", "tscale", "Cscale")
        are taken from the shared plot configuration stored in `self._plotconfig`.
        """
        plt.rc('text', usetex=True) # Enable LaTeX formatting for Matplotlib
        # extract plotconfig
        plotconfig = self._plotconfig

        if not self.curves:
            print("No curves to plot.")
            return

        fig, ax = plt.subplots(figsize=(8, 6))

        for data in self.curves.values():
            if data["discrete"]:
                # Discrete data plotting
                ax.scatter(data["times"], data["values"], label=data["label"],
                           color=data["color"], marker=data["marker"],
                           facecolor=data["markerfacecolor"], edgecolor=data["markeredgecolor"],
                           s=data["markersize"]**2)
            else:
                # Continuous data plotting
                t_min, t_max = data["tmin"], data["tmax"]
                if t_range:
                    t_min, t_max = max(t_min, t_range[0]), min(t_max, t_range[1])

                t_plot = np.linspace(t_min, t_max, 500)
                CF_plot = data["interpolant"](t_plot)
                ax.plot(t_plot, CF_plot, label=data["label"],
                        color=data["color"], linestyle=data["linestyle"], linewidth=data["linewidth"])

        # Configure the plot
        ax.set_xlabel(f'Time [{plotconfig["tunit"]}]' if plotconfig else "Time")
        ax.set_ylabel(f'Concentration in Food [{plotconfig["Cunit"]}]' if plotconfig else "CF")
        title_main = "Concentration in Food vs. Time"
        title_sub = rf"$\bf{{{self.name}}}$" + (f": {self.description}" if self.description else "")
        ax.set_title(f"{title_main}\n{title_sub}", fontsize=10)
        ax.legend()
        ax.grid(True)
        plt.show()
        # store metadata
        setattr(fig,_fig_metadata_atrr_,f"cmp_pltCF_{self.name}")
        return fig


    def to_dataframe(self, t_range=None, num_points=1000, time_list=None):
        """
        Export interpolated CF data as a pandas DataFrame.
        Parameters:
        - t_range: tuple (t_min, t_max), optional
            The time range for interpolation (default: min & max of all stored results).
        - num_points: int, optional
            Number of points in the interpolated time grid (default: 1000).
        - time_list: list or array, optional
            Explicit list of time points for interpolation (overrides t_range & num_points).
        Returns:
        - pd.DataFrame
            A DataFrame with time as index and CF values as columns (one per simulation).
        """
        if not self.curves:
            print("No data to export.")
            return pd.DataFrame()

        # Determine the time grid
        if time_list is not None:
            t_grid = np.array(time_list)
        else:
            all_t_min = min(data["tmin"] for data in self.curves.values())
            all_t_max = max(data["tmax"] for data in self.curves.values())
            # Default time range
            t_min, t_max = t_range if t_range else (all_t_min, all_t_max)
            # Create evenly spaced time grid
            t_grid = np.linspace(t_min, t_max, num_points)
        # Create DataFrame with time as index
        df = pd.DataFrame({"Time (s)": t_grid})

        # Interpolate each stored CF curve at the common time grid
        for key, data in self.curves.items():
            df[data["label"]] = data["interpolant"](t_grid)
        return df


    def save_as_excel(self, filename="CF_data.xlsx", destinationfolder=os.getcwd(), overwrite=False,
                      t_range=None, num_points=1000, time_list=None):
        """
        Save stored CF data to an Excel file.
        Parameters:
        - filename: str, Excel filename.
        - destinationfolder: str, where to save the file.
        - overwrite: bool, overwrite existing file.
        - t_range: tuple (t_min, t_max), optional
            The time range for interpolation (default: min & max of all stored results).
        - num_points: int, optional
            Number of points in the interpolated time grid (default: 1000).
        - time_list: list or array, optional
            Explicit list of time points for interpolation (overrides t_range & num_points).
        """
        if not self.curves:
            print("No data to export.")
            return
        df = self.to_dataframe(t_range=t_range, num_points=num_points, time_list=time_list)
        filepath = os.path.join(destinationfolder, filename)
        if not overwrite and os.path.exists(filepath):
            print(f"File {filepath} already exists. Use overwrite=True to replace it.")
            return

        df.to_excel(filepath, index=False)
        print(f"Saved Excel file: {filepath}")


    def save_as_csv(self, filename="CF_data.csv", destinationfolder=os.getcwd(), overwrite=False,
                    t_range=None, num_points=200, time_list=None):
        """
        Save stored CF data to a CSV file.
        Parameters:
        - filename: str, CSV filename.
        - destinationfolder: str, where to save the file.
        - overwrite: bool, overwrite existing file.
        - t_range: tuple (t_min, t_max), optional
            The time range for interpolation (default: min & max of all stored results).
        - num_points: int, optional
            Number of points in the interpolated time grid (default: 200).
        - time_list: list or array, optional
            Explicit list of time points for interpolation (overrides t_range & num_points).
        """
        if not self.curves:
            print("No data to export.")
            return
        df = self.to_dataframe(t_range=t_range, num_points=num_points, time_list=time_list)
        filepath = os.path.join(destinationfolder, filename)
        if not overwrite and os.path.exists(filepath):
            print(f"File {filepath} already exists. Use overwrite=True to replace it.")
            return
        df.to_csv(filepath, index=False)
        print(f"Saved CSV file: {filepath}")


    def rgb(self):
        """Displays a categorized color chart with properly aligned headers."""
        plt.rc('text', usetex=False) # Use Matplotlib's built-in mathtext (LaTeX rendering disabled)
        rgb()


# restartfile
class restartfile:
    """
    A container class for storing simulation restart data.

    This class facilitates storing and restoring simulation parameters and results,
    allowing simulations to be resumed or analyzed after computation.

    Methods:
    --------
    copy(what)
        Creates a deep copy of various data types to ensure safety in storage.

    Example:
    --------
    ```python
    restart = restartfile()
    copy_data = restart.copy([1, 2, 3])
    ```
    """
    @classmethod
    def copy(cls, what):
        """Safely copy a parameter that can be a float, str, dict, or a NumPy array"""
        if isinstance(what, (int, float, str, tuple,bool)):  # Immutable types (direct copy)
            return what
        elif isinstance(what, np.ndarray):  # NumPy array (ensure a separate copy)
            return np.copy(what)
        elif isinstance(what, dict):  # Dictionary (deep copy)
            return duplicate(what)
        elif what is None:
            return None
        else:  # Fallback for other complex types
            return duplicate(what)
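
A small sketch of the copy semantics (immutables pass through, NumPy arrays and dicts are duplicated):

```python
import numpy as np
from patankar.migration import restartfile

a = np.array([1.0, 2.0, 3.0])
b = restartfile.copy(a)             # independent copy of the array
b[0] = 99.0
print(a[0])                         # still 1.0: the stored array is not aliased
print(restartfile.copy("layer A"))  # immutable types are returned unchanged
```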

# specific restartfile for senspatankar
class restartfile_senspantakar(restartfile):
    """
    Specialized restart file container for the `senspatankar` migration solver.

    This class stores the simulation inputs and computed results, enabling
    the resumption of a simulation from a saved state.

    Attributes:
    -----------
    inputs : dict
        Stores all initial simulation inputs.
    t : float or None
        Simulation time at the stored state.
    CF : float or None
        Concentration in food at the stored state.
    Cprofile : Cprofile or None
        Concentration profile at the stored state.

    Methods:
    --------
    freezeCF(t, CF)
        Saves the food concentration `CF` at time `t`.
    freezeCx(x, Cx)
        Saves the concentration profile `Cx` over `x`.

    Example:
    --------
    ```python
    restart = restartfile_senspantakar(multilayer, medium, name, description, ...)
    restart.freezeCF(t=1000, CF=0.05)
    ```
    """
    def __init__(self,multilayer,medium,name,description,
                 t,autotime,timescale,Cxprevious,
                 ntimes,RelTol,AbsTol,deepcopy=True):
        """constructor to be called at the intialization"""
        if deepcopy:
            inputs = {
                "multilayer":multilayer.copy(),
                "medium":medium.copy(),
                "name":restartfile.copy(name),
                "description":restartfile.copy(description),
                "t":restartfile.copy(t), # t is a duration not absolute time (it should not be reused)
                "autotime":restartfile.copy(autotime),
                "timescale":restartfile.copy(timescale),
                "Cxprevious":Cxprevious,
                "ntimes":restartfile.copy(ntimes),
                "RelTol":restartfile.copy(RelTol),
                "AbsTol":restartfile.copy(AbsTol)
                }
        else:
            inputs = {
                "multilayer":multilayer,
                "medium":medium,
                "name":name,
                "description":description,
                "t":t, # t is a duration not absolute time (it should not be reused)
                "autotime":autotime,
                "timescale":timescale,
                "Cxprevious":Cxprevious,
                "ntimes":ntimes,
                "RelTol":RelTol,
                "AbsTol":AbsTol
                }
        # inputs
        self.inputs = inputs
        # outputs
        self.t = None # no result yet
        self.CF = None # no result yet
        self.Cprofile = None # no result yet

    def freezeCF(self,t,CF):
        """Freeze the CF solution CF(t)"""
        self.t = t
        self.CF = CF

    def freezeCx(self,x,Cx):
        """Freeze the Cx solution Cx(x)"""
        self.Cprofile = Cprofile(x,Cx)

    def __repr__(self):
        """representation of the restart object"""
        if self.t is None:
            print("Restart file with no result")
        else:
            print(f"Restart file at t={self.t} with CF={self.CF}")
            print("Details of the profile:")
            repr(self.Cprofile)
        return str(self)

    def __str__(self):
        """Formatted representation of the restart object"""
        res = "no result" if self.t is None else f"solution at t={self.t}"
        return f"<{self.__class__.__name__}: {res}"


# %% Core function
def senspatankar(multilayer=None, medium=None,
                 name=f"senspatantkar:{autoname(6)}", description="",
                 t=None, autotime=True, timescale="sqrt", Cxprevious=None,
                 ntimes=1e3, RelTol=1e-6, AbsTol=1e-6,
                 container=None):
    """
    Simulates in 1D the mass transfer of a substance initially distributed in a multilayer
    packaging structure into a food medium (or liquid medium). This solver uses a finite-volume
    method adapted from Patankar to handle partition coefficients between all layers, and
    between the food and the contact layer.

    Two typical configurations are implemented:

    Configuration (PBC=False)
        - Robin (third-kind boundary condition) on the left (in contact with food)
        - Impervious boundary condition on the right (in contact with surrounding)

    Configuration (PBC=True)
        - periodic boundary condition between left and right to simulate infinite stacking or setoff

    The configuration nofood is a variant of PBC=False with h=Bi=0 (impervious boundary condition on the left).

    The behavior of the solver is decided by medium attributes (see food.py module).
    The property medium.PBC will determine whether periodic boundary conditions are used or not.


    Parameters
    ----------
    multilayer : layer
        A ``layer`` (or combined layers) object describing the packaging.
    medium : foodlayer or foodphysics
        A ``foodlayer`` object describing the food (or liquid) medium in contact.
    name : str, optional
        Simulation name, default = f"senspatankar:{autoname(6)}" where autoname(6)
        is a random sequence of characters a-z A-Z 0-9
    description : str, optional
        Simulation description
    t : float or array_like, optional
        If a float is provided, it is taken as the total contact duration in seconds.
        If an array is provided, it is assumed to be time points where the solution
        will be evaluated. If None, it defaults to the contact time from the medium.
    autotime : bool, optional
        If True (default), an automatic time discretization is generated internally
        (linear or sqrt-based) between 0 and tmax (the maximum time). If False, the
        times in ``t`` are used directly.
    timescale : {"sqrt", "linear"}, optional
        Type of automatic time discretization if ``autotime=True``.
        "sqrt" (default) refines the early times more (useful for capturing rapid changes).
        "linear" uses a regular spacing.
    Cxprevious : Cprofile, optional (default=None)
        Concentration profile (from a previous simulation).
    ntimes : int, optional
        Number of time points in the automatically generated time vector if ``autotime=True``.
        The default is 1e3.
    RelTol : float, optional
        Relative tolerance for the ODE solver (``solve_ivp``). Default is 1e-6.
    AbsTol : float, optional
        Absolute tolerance for the ODE solver (``solve_ivp``). Default is 1e-6.

    Raises
    ------
    TypeError
        If ``multilayer`` is not a ``layer`` instance or ``medium`` is not a ``foodlayer`` instance,
        or if ``timescale`` is not a string.
    ValueError
        If an invalid ``timescale`` is given (not one of {"sqrt", "linear"}).

    Returns
    -------
    SensPatankarResult
        An object containing the time vector, concentration histories, fluxes, and
        spatial concentration profiles suitable for plotting and analysis.

    Notes
    -----
    - The geometry is assumed 1D: Food is on the left boundary, with a mass transfer coefficient
      `h = medium.h`, partition ratio `k0 = medium.k0`, and the packaging layers are to the right
      up to an impervious boundary.
    - Results are normalized internally using a reference layer (``iref``) specified in ``multilayer``.
      The reference layer is used to define dimensionless time (Fourier number Fo).
    - The dimensionless solution is solved by the Patankar approach with partition coefficients.

    Example
    -------
    .. code-block:: python

        from patankar.food import ethanol
        from patankar.layer import layer
        medium = ethanol()
        A = layer(layername="layer A")
        B = layer(layername="layer B")
        multilayer = A + B

        sol = senspatankar(multilayer, medium, autotime=True, timescale="sqrt")
        sol.plotCF()
        sol.plotC()
    """

    # Check arguments
    if not isinstance(multilayer, layer):
        raise TypeError(f"the input multilayer must be of class layer, not {type(multilayer).__name__}")
    if not isinstance(medium, (foodlayer,foodphysics)):
        raise TypeError(f"the input medium must be of class foodlayer, not {type(medium).__name__}")
    if not isinstance(timescale, str):
        raise TypeError(f"timescale must be a string, not {type(timescale).__name__}")

    # Refresh the physics of medium for parameters tuned by the end-user
    medium.refresh()

    # extract the PBC flag (True for setoff)
    PBC = medium.PBC

    # Restart file initialization (all parameters are saved - and cannot be changed)
    restart = restartfile_senspantakar(multilayer, medium, name,
            description, t, autotime, timescale, Cxprevious, ntimes, RelTol, AbsTol,deepcopy=True)
    # Restart file (unsecure version without deepcopy)
    restart_unsecure = restartfile_senspantakar(multilayer, medium, name,
            description, t, autotime, timescale, Cxprevious, ntimes, RelTol, AbsTol,deepcopy=False)

    # Contact medium properties
    CF0 = medium.get_param("CF0",0) # instead of medium.CF0 to get a fallback mechanism with nofood and setoff
    k0 = medium.get_param("k0",1)
    h = medium.get_param("h",0,acceptNone=False) # None will arise for PBC
    ttarget = medium.get_param("contacttime") # <-- ttarget is the time requested
    tmax = 2 * ttarget  # ensures at least up to 2*contacttime

    # Material properties
    k = multilayer.k / k0   # all k are normalized
    k0 = k0 / k0            # all k are normalized
    D = multilayer.D
    l = multilayer.l
    C0 = multilayer.C0

    # Validate/prepare time array
    if isinstance(t,tuple):
        t = check_units(t)[0]
    t = np.array(tmax if t is None else t, dtype=float) # <-- simulation time (longer than ttarget)
    if np.isscalar(t) or t.size == 1:
        t = np.array([0, t.item()],dtype=float)
    if t[0] != 0:
        t = np.insert(t, 0, 0)  # Ensure time starts at zero
    # Ensure t[-1] is greater than ttarget
    if t[-1] < ttarget.item():  # Convert ttarget to scalar before comparison
        t = np.append(t, [ttarget, 1.05*ttarget, 1.1*ttarget, 1.2*ttarget])  # Extend time array to cover requested time

    # Reference layer for dimensionless transformations
    iref = multilayer.referencelayer
    l_ref = l[iref]
    D_ref = D[iref]

    # Normalize lengths and diffusivities
    l_normalized = l / l_ref
    D_normalized = D / D_ref

    # Dimensionless time (Fourier number)
    timebase = l_ref**2 / D_ref
    Fo = t / timebase

    # Automatic time discretization if requested
    if autotime:
        if timescale.lower() == "linear":
            Fo_int = np.linspace(np.min(Fo), np.max(Fo), int(ntimes))
        elif timescale.lower() == "sqrt":
            Fo_int = np.linspace(np.sqrt(np.min(Fo)), np.sqrt(np.max(Fo)), int(ntimes))**2
        else:
            raise ValueError('timescale can be "sqrt" or "linear"')
        t = Fo_int * timebase
    else:
        Fo_int = Fo

    # L: dimensionless ratio of packaging to food volumes (scaled by reference layer thickness)
    A = medium.get_param("surfacearea",0)
    l_sum = multilayer.thickness
    VP = A * l_sum
    VF = medium.get_param("volume",1)
    LPF = VP / VF
    L = LPF * l_ref / l_sum

    # Bi: dimensionless mass transfer coefficient
    Bi = h * l_ref / D_ref

    # Compute equilibrium concentration factor
    sum_lL_C0 = np.sum(l_normalized * L * C0)
    sum_terms = np.sum((1 / k) * l_normalized * L)
    C0eq = (CF0 + sum_lL_C0) / (1 + sum_terms)
    if C0eq == 0:
        C0eq = 1.0

    # Normalize initial concentrations
    C0_normalized = C0 / C0eq
    CF0_normalized = CF0 / C0eq

    # Generate mesh (add offset x0 and concatenate them)
    meshes = multilayer.mesh()
    x0 = 0
    for mesh in meshes:
        mesh.xmesh += x0
        x0 += mesh.l
    xmesh = np.concatenate([m.xmesh for m in meshes])
    total_nodes = len(xmesh)

    # Positions of the interfaces (East and West)
    dw = np.concatenate([m.dw for m in meshes])
    de = np.concatenate([m.de for m in meshes])

    # Attach properties to nodes (flat interpolant)
    D_mesh = np.concatenate([D_normalized[m.index] for m in meshes])
    k_mesh = np.concatenate([k[m.index] for m in meshes])
    C0_mesh = np.concatenate([C0_normalized[m.index] for m in meshes])

    # Interpolate the initial solution if Cxprevious is supplied
    if Cxprevious is not None:
        if not isinstance(Cxprevious,Cprofile):
            raise TypeError(f"Cxprevisous should be a Cprofile object not a {type(Cxprevious).__name__}")
        C0_mesh = Cxprevious.interp(xmesh*l_ref) / C0eq # dimensionless

    # Conductances between the node and the next interface
    # item() is forced to avoid the (1,) Shape Issue (since NumPy 1.25)
    hw = np.zeros(total_nodes)
    he = np.zeros(total_nodes)
    if PBC:
        for i in range(total_nodes):
            prev = total_nodes-1 if i==0 else i-1
            hw[i] = (1 / ((de[prev] / D_mesh[prev] * k_mesh[prev] / k_mesh[i]) + dw[i] / D_mesh[i])).item()
    else:
        hw[0] = (1 / ((1 / k_mesh[0]) / Bi + dw[0] / D_mesh[0])).item()
        for i in range(1, total_nodes):
            hw[i] = (1 / ((de[i - 1] / D_mesh[i - 1] * k_mesh[i - 1] / k_mesh[i]) + dw[i] / D_mesh[i])).item()
    he[:-1] = hw[1:] # nodes are the center of FV elements: he = np.roll(hw, -1)
    he[-1]=hw[0] if PBC else 0.0 # we connect (PBC) or we enforce impervious (note that he was initialized to 0 already)

    if PBC: # periodic boundary condition

        # Assemble sparse matrix using COO format for efficient construction
        rows = np.zeros(3 * total_nodes, dtype=int) # row indices
        cols = np.zeros_like(rows) # col indices
        data = np.zeros_like(rows, dtype=np.float64) # values
        idx = 0
        for i in range(total_nodes):
            current = i
            west = (i-1) % total_nodes
            east = (i+1) % total_nodes
            denominator = dw[current] + de[current]
            k_current = k_mesh[current]
            k_west = k_mesh[west]
            k_east = k_mesh[east]
            # West neighbor
            rows[idx] = current
            cols[idx] = west
            data[idx] = hw[current] * k_west / k_current / denominator
            idx +=1
            # Diagonal
            rows[idx] = current
            cols[idx] = current
            data[idx] = (-hw[current] - he[current] * k_current/k_east) / denominator
            idx +=1
            # East neighbor
            rows[idx] = current
            cols[idx] = east
            data[idx] = he[current] / denominator
            idx +=1
        A = coo_matrix((data[:idx], (rows[:idx], cols[:idx])),
                     shape=(total_nodes, total_nodes)).tocsr()
        C_initial =  C0_mesh

    else: # Robin (left) + impervious (right) --> triband matrix

        # Assemble the tri-band matrix A as sparse for efficiency
        size = total_nodes + 1  # +1 for the food node
        main_diag = np.zeros(size)
        upper_diag = np.zeros(size - 1)
        lower_diag = np.zeros(size - 1)
        # Food node (index 0)
        main_diag[0] = (-L * hw[0] * (1 / k_mesh[0])).item()
        upper_diag[0] = (L * hw[0]).item()
        # Layer nodes
        for i in range(total_nodes):
            denom = dw[i] + de[i]
            if i == 0:
                main_diag[1] = (-hw[0] - he[0] * k_mesh[0] / k_mesh[1]) / denom
                upper_diag[1] = he[0] / denom
                lower_diag[0] = (hw[0] * (1 / k_mesh[0])) / denom
            elif i == total_nodes - 1:
                main_diag[i + 1] = (-hw[i]) / denom
                lower_diag[i] = (hw[i] * k_mesh[i - 1] / k_mesh[i]) / denom
            else:
                main_diag[i + 1] = (-hw[i] - he[i] * k_mesh[i] / k_mesh[i + 1]) / denom
                upper_diag[i + 1] = he[i] / denom
                lower_diag[i] = (hw[i] * k_mesh[i - 1] / k_mesh[i]) / denom
        A = diags([main_diag, upper_diag, lower_diag], [0, 1, -1], shape=(size, size), format='csr')
        C_initial = np.concatenate([CF0_normalized, C0_mesh])

    # ODE system: dC/dFo = A * C
    def odesys(_, C):
        return A.dot(C)

    sol = solve_ivp(   # <-- generic solver
        odesys,        # <-- our system (efficient sparse matrices)
        [Fo_int[0], Fo_int[-1]], # <-- integration range on Fourier scale
        C_initial,     # <-- initial solution
        t_eval=Fo_int, # <-- the solution is retrieved at these Fo values
        method='BDF',  # <-- backward differences are absolutely stable
        rtol=RelTol,   # <-- relative and absolute tolerances
        atol=AbsTol
    )

    # Check solution
    if not sol.success:
        print("Solver failed:", sol.message)

    # Extract solution
    if PBC:
        CF_dimless = np.full((sol.y.shape[1],), CF0 / C0eq)
        C_dimless = sol.y
        f = np.zeros_like(CF_dimless)
    else:
        CF_dimless = sol.y[0, :]
        C_dimless = sol.y[1:, :]
        # Robin flux
        f = hw[0] * (k0 * CF_dimless - C_dimless[0, :]) * C0eq

    # Compute cumulative flux
    fc = cumulative_trapezoid(f, t, initial=0)

    if PBC:
        # Build full (dimensionless) profile for plotting across each sub-node
        xfull, Cfull_dimless = compute_fc_profile_PBC(C_dimless, Fo_int, de, dw, he, hw, k_mesh, D_mesh, xmesh, xreltol=0)
        # Build full (dimensionless) profile for interpolation across each sub-node
        xfulli, Cfull_dimlessi = compute_fc_profile_PBC(C_dimless, Fo_int, de, dw, he, hw, k_mesh, D_mesh, xmesh, xreltol=1e-4)
    else:
        # Build full (dimensionless) profile for plotting across each sub-node
        xfull, Cfull_dimless = compute_fv_profile(xmesh, dw, de,C_dimless, k_mesh, D_mesh, hw, he, CF_dimless, k0, Fo_int, xreltol=0)
        # Build full (dimensionless) profile for interpolation across each sub-node
        xfulli, Cfull_dimlessi = compute_fv_profile(xmesh, dw, de,C_dimless, k_mesh, D_mesh, hw, he, CF_dimless, k0, Fo_int, xreltol=1e-4)


    # revert to dimensional concentrations
    CF = CF_dimless * C0eq
    Cx = Cfull_dimless * C0eq

    return SensPatankarResult(
        name=name,
        description=description,
        ttarget = ttarget,             # target time
        t=t,     # time where concentrations are calculated
        C= np.trapz(Cfull_dimless, xfull, axis=1)*C0eq,
        CF=CF,
        fc=fc,
        f=f,
        x=xfull * l_ref,           # revert to dimensional lengths
        Cx=Cx,
        tC=sol.t,
        C0eq=C0eq,
        timebase=timebase,
        restart=restart, # <--- restart info (inputs only)
        restart_unsecure=restart_unsecure,
        xi=xfulli*l_ref, # for restart only
        Cxi=Cfull_dimlessi*C0eq, # for restart only
        createcontainer = True,
        container=container
    )
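
To make the scaling used above concrete, here is a small numeric sketch of the dimensionless groups (Fourier number Fo, mass-transfer number Bi and the volume ratio L) for a monolayer; the values are illustrative only and not taken from a real simulation:

```python
# illustrative reference-layer properties (monolayer: l_sum = l_ref)
l_ref = 300e-6          # m, reference layer thickness
D_ref = 1e-14           # m^2/s, reference diffusivity
h     = 1e-7            # m/s, mass transfer coefficient at the food interface
A, VF = 6e-2, 1e-3      # m^2 packaging area, m^3 food volume

timebase = l_ref**2 / D_ref      # s, diffusion time scale of the reference layer
t  = 10 * 24 * 3600              # 10 days of contact, in seconds
Fo = t / timebase                # dimensionless time (Fourier number)
Bi = h * l_ref / D_ref           # dimensionless mass transfer coefficient
L  = (A * l_ref) / VF            # packaging-to-food volume ratio (l_sum = l_ref here)
print(f"Fo={Fo:.3g}, Bi={Bi:.3g}, L={L:.3g}")
```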


# Exact FV interpolant (with Robin BC)
def compute_fv_profile(xmesh, dw, de, C_dimless, k_mesh, D_mesh, hw, he, CF_dimless, k0, Fo_int, xreltol=0):
    """
    Compute the full finite-volume concentration profile, including node values and interface values.
    (this function is not nested inside senspatankar for better readability)

    Parameters:
        xmesh (np.ndarray): Node positions.
        dw (np.ndarray): Distance to west interfaces.
        de (np.ndarray): Distance to east interfaces.
        C_dimless (np.ndarray): Concentration at nodes.
        k_mesh (np.ndarray): Henry-like coefficient at nodes.
        D_mesh (np.ndarray): Diffusion coefficient at nodes.
        hw (np.ndarray): Conductance to west interface.
        he (np.ndarray): Conductance to east interface.
        CF_dimless (np.ndarray): Far-field (Food) concentration values.
        k0 (float): Partition coefficient at the boundary.
        Fo_int (np.ndarray): Time steps.
        xreltol (float, optional): Relative perturbation factor for interpolation accuracy. Defaults to 0.

    Returns:
        xfull (np.ndarray): Full spatial positions including nodes and interfaces.
        Cfull_dimless (np.ndarray): Full concentration profile.
    """
    num_nodes, num_timesteps = C_dimless.shape  # Extract shape

    # Compute xtol based on minimum interface distances
    xtol = np.min([np.min(de), np.min(dw)]) * xreltol

    # Adjust west and east interface positions
    xw = xmesh - dw + xtol  # Shift west interface
    xe = xmesh + de - xtol  # Shift east interface

    # Build full spatial profile
    xfull = np.empty(3 * num_nodes,dtype=np.float64)
    xfull[::3] = xw      # Every 3rd position is xw
    xfull[1::3] = xmesh  # Every 3rd position (offset by 1) is xmesh
    xfull[2::3] = xe     # Every 3rd position (offset by 2) is xe

    # Initialize concentration at interfaces
    Ce = np.zeros_like(C_dimless)  # East interfaces
    Cw = np.zeros_like(C_dimless)  # West interfaces

    # Compute Ce (east interface) for all timesteps at once
    Ce[:-1, :] = C_dimless[:-1, :] - (
        (de[:-1, None] * he[:-1, None] *
        ((k_mesh[:-1, None] / k_mesh[1:, None]) * C_dimless[:-1, :] - C_dimless[1:, :]))
        / D_mesh[:-1, None]
    )
    Ce[-1, :] = C_dimless[-1, :]  # Last node follows boundary condition

    # Compute Cw (west interface) for all timesteps at once
    Cw[1:, :] = C_dimless[1:, :] + (
        (dw[1:, None] * hw[1:, None] *
        ((k_mesh[:-1, None] / k_mesh[1:, None]) * C_dimless[:-1, :] - C_dimless[1:, :]))
        / D_mesh[1:, None]
    )

    # Compute Cw[0, :] separately to handle boundary condition
    Cw[0, :] = (C_dimless[0, :] + (
        dw[0] * hw[0] *
        (k0 / k_mesh[0] * CF_dimless - C_dimless[0, :])
        / D_mesh[0]
    )).flatten()  # Ensure correct shape

    # Interleave concentration values instead of using np.hstack and reshape
    Cfull_dimless = np.empty((num_timesteps, 3 * num_nodes),dtype=np.float64)
    Cfull_dimless[:, ::3] = Cw.T      # Every 3rd column is Cw
    Cfull_dimless[:, 1::3] = C_dimless.T  # Every 3rd column (offset by 1) is C
    Cfull_dimless[:, 2::3] = Ce.T      # Every 3rd column (offset by 2) is Ce

    return xfull, Cfull_dimless
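
The west/node/east interleaving performed above can be checked on a toy array; a minimal sketch of the indexing trick, independent of the solver:

```python
import numpy as np

xmesh = np.array([0.5, 1.5, 2.5])   # toy node positions
dw = de = np.full(3, 0.5)           # toy half-widths

xfull = np.empty(3 * xmesh.size)
xfull[::3]  = xmesh - dw            # west interfaces
xfull[1::3] = xmesh                 # node centers
xfull[2::3] = xmesh + de            # east interfaces
print(xfull)  # [0.  0.5 1.  1.  1.5 2.  2.  2.5 3. ]
```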


def compute_fc_profile_PBC(C, t, de, dw, he, hw, k, D, xmesh, xreltol=0):
    """
    Computes the full concentration profile, including interface concentrations,
    for a system with periodic boundary conditions (PBC).

    This function calculates the concentrations at the east (`Ce`) and west (`Cw`)
    interfaces of each finite volume node, ensuring periodicity in the domain.

    Parameters
    ----------
    C : np.ndarray, shape (num_nodes, num_timesteps)
        Concentration values at each node for all time steps.
    t : np.ndarray, shape (num_timesteps,)
        Time points at which concentration profiles are computed.
    de : np.ndarray, shape (num_nodes,)
        Eastward diffusion lengths at each node.
    dw : np.ndarray, shape (num_nodes,)
        Westward diffusion lengths at each node.
    he : np.ndarray, shape (num_nodes,)
        Eastward mass transfer coefficients.
    hw : np.ndarray, shape (num_nodes,)
        Westward mass transfer coefficients.
    k : np.ndarray, shape (num_nodes,)
        Partition coefficients at each node.
    D : np.ndarray, shape (num_nodes,)
        Diffusion coefficients at each node.
    xmesh : np.ndarray, shape (num_nodes,)
        Spatial positions of the mesh points.
    xreltol : float, optional, default=0
        Relative tolerance applied to spatial positions to adjust interface locations.

    Returns
    -------
    xfull : np.ndarray, shape (3 * num_nodes,)
        Full spatial positions including center nodes and interface positions.
    Cfull : np.ndarray, shape (num_timesteps, 3 * num_nodes)
        Full concentration profiles, including node and interface values.

    Notes
    -----
    - This function enforces periodic boundary conditions by shifting indices in `C` and `k`.
    - Concentrations at the interfaces (`Ce` and `Cw`) are computed using the finite volume approach.
    - The result `Cfull` contains interleaved values: [Cw, C, Ce] for each node.

    Example
    -------
    ```python
    xfull, Cfull = compute_fc_profile_PBC(C, t, de, dw, he, hw, k, D, xmesh)
    ```
    """

    num_nodes, num_timesteps = C.shape  # Extract dimensions

    # Pre-calculate shifted indices for periodic BC
    east_shift = np.roll(np.arange(num_nodes), -1)  # Shift left (next node)
    west_shift = np.roll(np.arange(num_nodes), 1)   # Shift right (previous node)

    # Get shifted concentrations and diffusion coefficients
    C_east = C[east_shift, :]  # Shape (num_nodes, num_timesteps)
    C_west = C[west_shift, :]  # Shape (num_nodes, num_timesteps)
    k_east = k[east_shift][:, None]  # Make it broadcastable (num_nodes, 1)
    k_west = k[west_shift][:, None]  # Make it broadcastable (num_nodes, 1)

    # Eastern interface concentrations (vectorized)
    Ce = C - (de[:, None] * he[:, None] * ((k[:, None] / k_east) * C - C_east) / D[:, None])

    # Western interface concentrations (vectorized)
    Cw = C + (dw[:, None] * hw[:, None] * ((k_west / k[:, None]) * C_west - C) / D[:, None])

    # Create full concentration matrix with interfaces
    Cfull = np.empty((num_timesteps, 3*num_nodes),dtype=np.float64)

    # Compute positional tolerances
    xtol = np.min([np.min(de), np.min(dw)]) * xreltol
    xw = xmesh - dw + xtol  # Shifted west positions
    xe = xmesh + de - xtol  # Shifted east positions

    # Interleave values: West, Center, East
    Cfull[:, ::3] = Cw.T  # Ensure correct alignment
    Cfull[:, 1::3] = C.T
    Cfull[:, 2::3] = Ce.T

    # Create full position vector
    xfull = np.empty(3*num_nodes,dtype=np.float64)
    xfull[::3] = xw
    xfull[1::3] = xmesh
    xfull[2::3] = xe

    return xfull, Cfull

# %% test and debug
# -------------------------------------------------------------------
# Example usage (for debugging / standalone tests)
# -------------------------------------------------------------------
if __name__ == '__main__':
    from patankar.food import ethanol, setoff, nofood
    from patankar.layer import PP

    medium = ethanol()
    medium.CF0 = 100 # works
    medium.update(CF0=100) # works also
    A = layer(layername="layer A",k=2,C0=0,D=1e-16)
    B = layer(layername="layer B")
    multilayer = A + B
    sol1 = senspatankar(multilayer, medium,t=(25,"days"))
    sol1.plotCF(t=np.array([3,10,14])*24*3600)
    sol1.plotCx()
    r=sol1.restart
    repr(r)

    # extend the solution for 40 days
    sol2 = sol1.resume((40,"days"))
    sol2.plotCF()
    sol2.plotCx()

    # extend the solution for 60 days from sol2
    sol3 = sol2.resume((60,"days"))
    sol3.update(name="sol3")
    sol3.plotCF()
    sol3.plotCx()

    # merge the previous solutions 1+2
    # extend the solution for 60 days from sol12=sol1+sol2
    sol12 = sol1+sol2
    sol123a = sol12.resume((60,"days"))
    sol123a.update(name="sol123a")
    sol123a.plotCF()
    sol123a.plotCx()

    # concat
    sol123a_ = sol12 + sol123a
    sol123a_.update(name="sol123a_ (full): sol12 + sol123a")
    sol123a_.plotCF()

    # compare with sol1+sol2+sol3
    sol123_ = sol1+sol2+sol3
    sol123_.update(name="sol123_ (full): sol1+sol2+sol3")
    sol123_.plotCF()
    sol123_.plotCx()

    # simulation of setoff
    packstorage = setoff(contacttime=(100,"days"))
    A = PP(l=(500,"um"),C0=0)
    B = PP(l=(300,"um"),C0=5000)
    AB = A+B
    print(medium)
    solAB = senspatankar(AB,packstorage)
    solAB.plotCx()

    # we extend the previous solution by putting medium in contact
    solABext = solAB.resume(medium=medium)
    solABext.plotCF()
    solABext.plotCx()

Functions

def autoname(nchars=6, charset='a-zA-Z0-9')

Generates a random simulation name.

Parameters:
- nchars (int): Number of characters in the name (default: 6).
- charset (str): Character set pattern (e.g., "a-zA-Z0-9").

Returns:
- str: A randomly generated name.

Expand source code
def autoname(nchars=6, charset="a-zA-Z0-9"):
    """
    Generates a random simulation name.

    Parameters:
    - nchars (int): Number of characters in the name (default: 6).
    - charset (str): Character set pattern (e.g., "a-zA-Z0-9").

    Returns:
    - str: A randomly generated name.
    """

    # Expand regex-like charset pattern
    char_pool = []
    # Find all ranges (e.g., "a-z", "A-Z", "0-9")
    pattern = re.findall(r'([a-zA-Z0-9])\-([a-zA-Z0-9])', charset)
    for start, end in pattern:
        char_pool.extend(chr(c) for c in range(ord(start), ord(end) + 1))
    # Include any explicit characters (e.g., "ABC" in "ABC0-9")
    explicit_chars = re.sub(r'([a-zA-Z0-9])\-([a-zA-Z0-9])', '', charset)  # Remove ranges
    char_pool.extend(explicit_chars)
    # Remove duplicates and sort (just for readability)
    char_pool = sorted(set(char_pool))
    # Generate random name
    return ''.join(random.choices(char_pool, k=nchars))
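
Usage sketch (the output is random; the shown values are only illustrative):

```python
print(autoname())               # e.g. 'aZ3k9Q'
print(autoname(8, "A-F0-9"))    # 8 characters drawn from a hexadecimal-style charset
```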
def check_units(value, ProvidedUnits=None, ExpectedUnits=None, defaulttempUnits='degC')

check numeric inputs and convert them to SI units

Expand source code
def check_units(value,ProvidedUnits=None,ExpectedUnits=None,defaulttempUnits="degC"):
    """ check numeric inputs and convert them to SI units """
    # by convention, NumPy arrays and None are return unchanged (prevent nesting)
    if isinstance(value,np.ndarray) or value is None:
        return value,UnknownUnits
    if isinstance(value,tuple):
        if len(value) != 2:
            raise ValueError('value should be a tuple: (value,"unit")')
        ProvidedUnits = value[1]
        value = value[0]
    if isinstance(value,list): # the function is vectorized
        value = np.array(value)
    if {"degC", "K"} & {ProvidedUnits, ExpectedUnits}: # the value is a temperature
        ExpectedUnits = defaulttempUnits if ExpectedUnits is None else ExpectedUnits
        ProvidedUnits = ExpectedUnits if ProvidedUnits is None else ProvidedUnits
        if ProvidedUnits=="degC" and ExpectedUnits=="K":
            value += constants["T0K"]
        elif ProvidedUnits=="K" and ExpectedUnits=="degC":
            value -= constants["T0K"]
        return np.array([value]),ExpectedUnits
    else: # the value is not a temperature
        ExpectedUnits = NoUnits if ExpectedUnits is None else ExpectedUnits
        if (ProvidedUnits==ExpectedUnits) or (ProvidedUnits==NoUnits) or (ExpectedUnits==None):
            conversion =1               # no conversion needed
            units = ExpectedUnits if ExpectedUnits is not None else NoUnits
        else:
            q0,conversion,units = toSI(qSI(1,ProvidedUnits))
        return np.array([value*conversion]),units
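
Usage sketch (assuming the unit strings below are registered in the underlying unit system, as "days" is in the module examples):

```python
import numpy as np

tSI, tunit = check_units((25, "days"))                    # -> array([2160000.]) in SI seconds
TK, Kunit  = check_units((25, "degC"), ExpectedUnits="K") # temperature shortcut -> array([298.15]), "K"
a, u = check_units(np.array([1e4, 5e4]))                  # NumPy arrays are passed through unchanged
```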
def colormap(name='viridis', ncolors=16, tooclearflag=True, reverse=False)

Generates a list of ncolors colors from the specified colormap.

Parameters:

name : str, optional (default="viridis")
    Name of the Matplotlib colormap to use.
ncolors : int, optional (default=16)
    Number of colors to generate.
tooclearflag : bool, optional (default=True)
    If True, applies the tooclear() function to adjust brightness.
reverse : bool, optional (default=False)
    If True, reverses the colormap.

Supported colormaps:

  • "viridis"
  • "jet"
  • "plasma"
  • "inferno"
  • "magma"
  • "cividis"
  • "turbo"
  • "coolwarm"
  • "spring"
  • "summer"
  • "autumn"
  • "winter"
  • "twilight"
  • "rainbow"
  • "hsv"

Returns:

list of tuples List of RGB(A) colors in [0,1] range.

Raises:

ValueError If the colormap name is not recognized.
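
A minimal usage sketch (assuming colormap is imported from this module): the returned colors can be passed directly to Matplotlib.

    import numpy as np
    import matplotlib.pyplot as plt

    colors = colormap("viridis", ncolors=5)      # 5 RGB(A) tuples, darkened if too bright
    x = np.linspace(0, 1, 100)
    for i, c in enumerate(colors):
        plt.plot(x, x**(i + 1), color=c, label=f"x^{i+1}")
    plt.legend()
    plt.show()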

Expand source code
def colormap(name="viridis", ncolors=16, tooclearflag=True, reverse=False):
    """
    Generates a list of `ncolors` colors from the specified colormap.

    Parameters:
    -----------
    name : str, optional (default="viridis")
        Name of the Matplotlib colormap to use.
    ncolors : int, optional (default=16)
        Number of colors to generate.
    tooclearflag : bool, optional (default=True)
        If True, applies `tooclear` function to adjust brightness.
    reverse : bool, optional (default=False)
        If True, reverses the colormap.

    Supported colormaps:
    --------------------
    - "viridis"
    - "jet"
    - "plasma"
    - "inferno"
    - "magma"
    - "cividis"
    - "turbo"
    - "coolwarm"
    - "spring"
    - "summer"
    - "autumn"
    - "winter"
    - "twilight"
    - "rainbow"
    - "hsv"

    Returns:
    --------
    list of tuples
        List of RGB(A) colors in [0,1] range.

    Raises:
    -------
    ValueError
        If the colormap name is not recognized.
    """
    cmap_name = name + "_r" if reverse else name  # Append "_r" to reverse colormap
    # Check if the colormap exists
    if cmap_name not in plt.colormaps():
        raise ValueError(f"Invalid colormap name '{cmap_name}'. Use one from: {list(plt.colormaps())}")

    cmap = plt.colormaps.get_cmap(cmap_name)  # Fetch the colormap
    colors = [cmap(i / (ncolors - 1)) for i in range(ncolors)]  # Normalize colors
    return [tooclear(c) if tooclearflag else c[:3] for c in colors]  # Apply tooclear if enabled
def compute_fc_profile_PBC(C, t, de, dw, he, hw, k, D, xmesh, xreltol=0)

Computes the full concentration profile, including interface concentrations, for a system with periodic boundary conditions (PBC).

This function calculates the concentrations at the east (Ce) and west (Cw) interfaces of each finite volume node, ensuring periodicity in the domain.

Parameters

C : np.ndarray, shape (num_nodes, num_timesteps)
Concentration values at each node for all time steps.
t : np.ndarray, shape (num_timesteps,)
Time points at which concentration profiles are computed.
de : np.ndarray, shape (num_nodes,)
Eastward diffusion lengths at each node.
dw : np.ndarray, shape (num_nodes,)
Westward diffusion lengths at each node.
he : np.ndarray, shape (num_nodes,)
Eastward mass transfer coefficients.
hw : np.ndarray, shape (num_nodes,)
Westward mass transfer coefficients.
k : np.ndarray, shape (num_nodes,)
Partition coefficients at each node.
D : np.ndarray, shape (num_nodes,)
Diffusion coefficients at each node.
xmesh : np.ndarray, shape (num_nodes,)
Spatial positions of the mesh points.
xreltol : float, optional, default=0
Relative tolerance applied to spatial positions to adjust interface locations.

Returns

xfull : np.ndarray, shape (3 * num_nodes,)
Full spatial positions including center nodes and interface positions.
Cfull : np.ndarray, shape (num_timesteps, 3 * num_nodes)
Full concentration profiles, including node and interface values.

Notes

  • This function enforces periodic boundary conditions by shifting indices in C and k.
  • Concentrations at the interfaces (Ce and Cw) are computed using the finite volume approach.
  • The result Cfull contains interleaved values: [Cw, C, Ce] for each node.

Example

xfull, Cfull = compute_fc_profile_PBC(C, t, de, dw, he, hw, k, D, xmesh)
Expand source code
def compute_fc_profile_PBC(C, t, de, dw, he, hw, k, D, xmesh, xreltol=0):
    """
    Computes the full concentration profile, including interface concentrations,
    for a system with periodic boundary conditions (PBC).

    This function calculates the concentrations at the east (`Ce`) and west (`Cw`)
    interfaces of each finite volume node, ensuring periodicity in the domain.

    Parameters
    ----------
    C : np.ndarray, shape (num_nodes, num_timesteps)
        Concentration values at each node for all time steps.
    t : np.ndarray, shape (num_timesteps,)
        Time points at which concentration profiles are computed.
    de : np.ndarray, shape (num_nodes,)
        Eastward diffusion lengths at each node.
    dw : np.ndarray, shape (num_nodes,)
        Westward diffusion lengths at each node.
    he : np.ndarray, shape (num_nodes,)
        Eastward mass transfer coefficients.
    hw : np.ndarray, shape (num_nodes,)
        Westward mass transfer coefficients.
    k : np.ndarray, shape (num_nodes,)
        Partition coefficients at each node.
    D : np.ndarray, shape (num_nodes,)
        Diffusion coefficients at each node.
    xmesh : np.ndarray, shape (num_nodes,)
        Spatial positions of the mesh points.
    xreltol : float, optional, default=0
        Relative tolerance applied to spatial positions to adjust interface locations.

    Returns
    -------
    xfull : np.ndarray, shape (3 * num_nodes,)
        Full spatial positions including center nodes and interface positions.
    Cfull : np.ndarray, shape (num_timesteps, 3 * num_nodes)
        Full concentration profiles, including node and interface values.

    Notes
    -----
    - This function enforces periodic boundary conditions by shifting indices in `C` and `k`.
    - Concentrations at the interfaces (`Ce` and `Cw`) are computed using the finite volume approach.
    - The result `Cfull` contains interleaved values: [Cw, C, Ce] for each node.

    Example
    -------
    ```python
    xfull, Cfull = compute_fc_profile_PBC(C, t, de, dw, he, hw, k, D, xmesh)
    ```
    """

    num_nodes, num_timesteps = C.shape  # Extract dimensions

    # Pre-calculate shifted indices for periodic BC
    east_shift = np.roll(np.arange(num_nodes), -1)  # Shift left (next node)
    west_shift = np.roll(np.arange(num_nodes), 1)   # Shift right (previous node)

    # Get shifted concentrations and diffusion coefficients
    C_east = C[east_shift, :]  # Shape (num_nodes, num_timesteps)
    C_west = C[west_shift, :]  # Shape (num_nodes, num_timesteps)
    k_east = k[east_shift][:, None]  # Make it broadcastable (num_nodes, 1)
    k_west = k[west_shift][:, None]  # Make it broadcastable (num_nodes, 1)

    # Eastern interface concentrations (vectorized)
    Ce = C - (de[:, None] * he[:, None] * ((k[:, None] / k_east) * C - C_east) / D[:, None])

    # Western interface concentrations (vectorized)
    Cw = C + (dw[:, None] * hw[:, None] * ((k_west / k[:, None]) * C_west - C) / D[:, None])

    # Create full concentration matrix with interfaces
    Cfull = np.empty((num_timesteps, 3*num_nodes),dtype=np.float64)

    # Compute positional tolerances
    xtol = np.min([np.min(de), np.min(dw)]) * xreltol
    xw = xmesh - dw + xtol  # Shifted west positions
    xe = xmesh + de - xtol  # Shifted east positions

    # Interleave values: West, Center, East
    Cfull[:, ::3] = Cw.T  # Ensure correct alignment
    Cfull[:, 1::3] = C.T
    Cfull[:, 2::3] = Ce.T

    # Create full position vector
    xfull = np.empty(3*num_nodes,dtype=np.float64)
    xfull[::3] = xw
    xfull[1::3] = xmesh
    xfull[2::3] = xe

    return xfull, Cfull
def compute_fv_profile(xmesh, dw, de, C_dimless, k_mesh, D_mesh, hw, he, CF_dimless, k0, Fo_int, xreltol=0)

Compute the full finite-volume concentration profile, including node values and interface values. (This function is not nested inside senspatankar for better readability.)

Parameters

xmesh (np.ndarray): Node positions.
dw (np.ndarray): Distance to west interfaces.
de (np.ndarray): Distance to east interfaces.
C_dimless (np.ndarray): Concentration at nodes.
k_mesh (np.ndarray): Henri-like coefficient at nodes.
D_mesh (np.ndarray): Diffusion coefficient at nodes.
hw (np.ndarray): Conductance to west interface.
he (np.ndarray): Conductance to east interface.
CF_dimless (np.ndarray): Far-field (food) concentration values.
k0 (float): Partition coefficient at the boundary.
Fo_int (np.ndarray): Time steps.
xreltol (float, optional): Relative perturbation factor for interpolation accuracy. Defaults to 0.

Returns

xfull (np.ndarray): Full spatial positions including nodes and interfaces.
Cfull_dimless (np.ndarray): Full concentration profile.
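
A minimal sketch with synthetic single-layer data (assuming compute_fv_profile is imported from patankar.migration; the arrays are illustrative, not taken from a real simulation). It shows the interleaved [Cw, C, Ce] layout of the output.

    import numpy as np

    # 3 nodes, 2 time steps, uniform dimensionless properties (illustrative values)
    xmesh = np.array([0.25, 0.50, 0.75])
    dw = de = np.full(3, 0.25)
    C_dimless = np.ones((3, 2))            # shape (num_nodes, num_timesteps)
    k_mesh = D_mesh = hw = he = np.ones(3)
    CF_dimless = np.zeros(2)               # food initially empty
    xfull, Cfull = compute_fv_profile(xmesh, dw, de, C_dimless, k_mesh, D_mesh,
                                      hw, he, CF_dimless, k0=1.0,
                                      Fo_int=np.array([0.0, 0.1]))
    print(xfull.shape, Cfull.shape)        # (9,) and (2, 9): [Cw, C, Ce] interleaved per node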

Expand source code
def compute_fv_profile(xmesh, dw, de, C_dimless, k_mesh, D_mesh, hw, he, CF_dimless, k0, Fo_int, xreltol=0):
    """
    Compute the full finite-volume concentration profile, including node values and interface values.
    (this function is not nested inside senspatankar for better readability)

    Parameters:
        xmesh (np.ndarray): Node positions.
        dw (np.ndarray): Distance to west interfaces.
        de (np.ndarray): Distance to east interfaces.
        C_dimless (np.ndarray): Concentration at nodes.
        k_mesh (np.ndarray): Henri-like coefficient at nodes.
        D_mesh (np.ndarray): Diffusion coefficient at nodes.
        hw (np.ndarray): Conductance to west interface.
        he (np.ndarray): Conductance to east interface.
        CF_dimless (np.ndarray): Far-field (Food) concentration values.
        k0 (float): Partition coefficient at the boundary.
        Fo_int (np.ndarray): Time steps.
        xreltol (float, optional): Relative perturbation factor for interpolation accuracy. Defaults to 0.

    Returns:
        xfull (np.ndarray): Full spatial positions including nodes and interfaces.
        Cfull_dimless (np.ndarray): Full concentration profile.
    """
    num_nodes, num_timesteps = C_dimless.shape  # Extract shape

    # Compute xtol based on minimum interface distances
    xtol = np.min([np.min(de), np.min(dw)]) * xreltol

    # Adjust west and east interface positions
    xw = xmesh - dw + xtol  # Shift west interface
    xe = xmesh + de - xtol  # Shift east interface

    # Build full spatial profile
    xfull = np.empty(3 * num_nodes,dtype=np.float64)
    xfull[::3] = xw      # Every 3rd position is xw
    xfull[1::3] = xmesh  # Every 3rd position (offset by 1) is xmesh
    xfull[2::3] = xe     # Every 3rd position (offset by 2) is xe

    # Initialize concentration at interfaces
    Ce = np.zeros_like(C_dimless)  # East interfaces
    Cw = np.zeros_like(C_dimless)  # West interfaces

    # Compute Ce (east interface) for all timesteps at once
    Ce[:-1, :] = C_dimless[:-1, :] - (
        (de[:-1, None] * he[:-1, None] *
        ((k_mesh[:-1, None] / k_mesh[1:, None]) * C_dimless[:-1, :] - C_dimless[1:, :]))
        / D_mesh[:-1, None]
    )
    Ce[-1, :] = C_dimless[-1, :]  # Last node follows boundary condition

    # Compute Cw (west interface) for all timesteps at once
    Cw[1:, :] = C_dimless[1:, :] + (
        (dw[1:, None] * hw[1:, None] *
        ((k_mesh[:-1, None] / k_mesh[1:, None]) * C_dimless[:-1, :] - C_dimless[1:, :]))
        / D_mesh[1:, None]
    )

    # Compute Cw[0, :] separately to handle boundary condition
    Cw[0, :] = (C_dimless[0, :] + (
        dw[0] * hw[0] *
        (k0 / k_mesh[0] * CF_dimless - C_dimless[0, :])
        / D_mesh[0]
    )).flatten()  # Ensure correct shape

    # Interleave concentration values instead of using np.hstack and reshape
    Cfull_dimless = np.empty((num_timesteps, 3 * num_nodes),dtype=np.float64)
    Cfull_dimless[:, ::3] = Cw.T      # Every 3rd column is Cw
    Cfull_dimless[:, 1::3] = C_dimless.T  # Every 3rd column (offset by 1) is C
    Cfull_dimless[:, 2::3] = Ce.T      # Every 3rd column (offset by 2) is Ce

    return xfull, Cfull_dimless
def custom_plt_figure(*args, **kwargs)

Ensure all figures are PrintableFigure.

Expand source code
def custom_plt_figure(*args, **kwargs):
    """Ensure all figures are PrintableFigure."""
    kwargs.setdefault("FigureClass", PrintableFigure)
    return original_plt_figure(*args, **kwargs)
def custom_plt_subplots(*args, **kwargs)

Ensure plt.subplots() returns a PrintableFigure.

Expand source code
def custom_plt_subplots(*args, **kwargs):
    """Ensure plt.subplots() returns a PrintableFigure."""
    kwargs.setdefault("FigureClass", PrintableFigure)
    fig, ax = original_plt_subplots(*args, **kwargs)
    return fig, ax
def is_valid_figure(fig)

Checks if fig is a valid and open Matplotlib figure.

Parameters: - fig: object to check

Returns: - bool: True if fig is a valid, open Matplotlib figure.
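
A minimal sketch: the check returns True only while the figure object is still registered as open.

    import matplotlib.pyplot as plt

    fig = plt.figure()
    is_valid_figure(fig)     # True: the figure exists and is still open
    plt.close(fig)
    is_valid_figure(fig)     # False: the figure has been closed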

Expand source code
def is_valid_figure(fig):
    """
    Checks if `fig` is a valid and open Matplotlib figure.

    Parameters:
    - fig: object to check

    Returns:
    - bool: True if `fig` is a valid, open Matplotlib figure.
    """
    return isinstance(fig, Figure) and plt.fignum_exists(fig.number)
def print_figure(fig, filename='', destinationfolder=os.getcwd(), overwrite=False, dpi=300)

Save the figure in both PDF and PNG formats.

Parameters:
- fig: Matplotlib figure object to be saved.
- filename: str, base filename (auto-generated if empty).
- destinationfolder: str, folder to save the files.
- overwrite: bool, overwrite existing files.
- dpi: int, resolution (default=300).
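
A minimal usage sketch (assuming print_figure is imported from this module): with an empty filename the name is auto-generated from the figure metadata, and both a PDF and a PNG are written to the current working directory.

    import matplotlib.pyplot as plt

    fig = plt.figure()                   # figures created via plt are PrintableFigure instances here
    plt.plot([0, 1, 2], [0, 1, 4])
    print_figure(fig, overwrite=True)    # saves <autoname>.pdf and <autoname>.png in os.getcwd()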

Expand source code
def print_figure(fig, filename="", destinationfolder=os.getcwd(), overwrite=False, dpi=300):
    """
    Save the figure in both PDF and PNG formats.

    Parameters:
    - fig: Matplotlib figure object to be saved.
    - filename: str, base filename (auto-generated if empty).
    - destinationfolder: str, folder to save the files.
    - overwrite: bool, overwrite existing files.
    - dpi: int, resolution (default=300).
    """
    if is_valid_figure(fig):
        print_pdf(fig, filename, destinationfolder, overwrite, dpi)
        print_png(fig, filename, destinationfolder, overwrite, dpi)
    else:
        print("no valid figure")
def print_pdf(fig, filename='', destinationfolder=os.getcwd(), overwrite=False, dpi=300)

Save a given figure as a PDF.

Parameters:
- fig: Matplotlib figure object to be saved.
- filename: str, PDF filename (auto-generated if empty).
- destinationfolder: str, folder to save the file.
- overwrite: bool, overwrite existing file.
- dpi: int, resolution (default=300).

Expand source code
def print_pdf(fig, filename="", destinationfolder=os.getcwd(), overwrite=False, dpi=300):
    """
    Save a given figure as a PDF.

    Parameters:
    - fig: Matplotlib figure object to be saved.
    - filename: str, PDF filename (auto-generated if empty).
    - destinationfolder: str, folder to save the file.
    - overwrite: bool, overwrite existing file.
    - dpi: int, resolution (default=300).
    """
    if not is_valid_figure(fig):
        print("no valid figure")
        return
    # Generate filename if not provided
    if not filename:
        filename = _generate_figname(fig, ".pdf")
    # Ensure full path
    filename = os.path.join(destinationfolder, filename)
    # Prevent overwriting unless specified
    if not overwrite and os.path.exists(filename):
        print(f"File {filename} already exists. Use overwrite=True to replace it.")
        return
    # Save figure as PDF
    fig.savefig(filename, format="pdf", dpi=dpi, bbox_inches="tight")
    print(f"Saved PDF: {filename}")
def print_png(fig, filename='', destinationfolder=os.getcwd(), overwrite=False, dpi=300)

Save a given figure as a PNG.

Parameters:
- fig: Matplotlib figure object to be saved.
- filename: str, PNG filename (auto-generated if empty).
- destinationfolder: str, folder to save the file.
- overwrite: bool, overwrite existing file.
- dpi: int, resolution (default=300).

Expand source code
def print_png(fig, filename="", destinationfolder=os.getcwd(), overwrite=False, dpi=300):
    """
    Save a given figure as a PNG.

    Parameters:
    - fig: Matplotlib figure object to be saved.
    - filename: str, PNG filename (auto-generated if empty).
    - destinationfolder: str, folder to save the file.
    - overwrite: bool, overwrite existing file.
    - dpi: int, resolution (default=300).
    """
    if not is_valid_figure(fig):
        print("no valid figure")
        return
    # Generate filename if not provided
    if not filename:
        filename = _generate_figname(fig, ".png")
    # Ensure full path
    filename = os.path.join(destinationfolder, filename)
    # Prevent overwriting unless specified
    if not overwrite and os.path.exists(filename):
        print(f"File {filename} already exists. Use overwrite=True to replace it.")
        return
    # Save figure as PNG
    fig.savefig(filename, format="png", dpi=dpi, bbox_inches="tight")
    print(f"Saved PNG: {filename}")
def rgb()

Displays a categorized color chart with properly aligned headers.

Expand source code
def rgb():
    """Displays a categorized color chart with properly aligned headers."""
    ncols = len(COLOR_CATEGORIES)
    max_rows = max(len(colors) + spacing for _, colors, spacing in COLOR_CATEGORIES)
    fig, ax = plt.subplots(figsize=(ncols * 2.5, max_rows * 0.6))
    ax.set_xticks([])
    ax.set_yticks([])
    ax.set_frame_on(False)
    x_spacing = 1.8  # Horizontal spacing between columns
    y_spacing = 1.0  # Vertical spacing between color patches
    text_size = 13   # Increased text size by 50%
    for col_idx, (category, colors, extra_space) in enumerate(COLOR_CATEGORIES):
        y_pos = max_rows  # Start at the top
        ax.text(col_idx * x_spacing + (x_spacing - 0.2) / 2, y_pos + 1.2, category,
                fontsize=text_size + 2, fontweight='bold', ha='center')
        y_pos -= y_spacing  # Move down after title
        for color in colors:
            if color == "":  # Empty string is a spacer
                y_pos -= y_spacing * 0.5
                continue
            hexval = CSS_COLORS.get(color.lower(), "white")
            y_pos -= y_spacing  # Move down before drawing
            ax.add_patch(plt.Rectangle((col_idx * x_spacing, y_pos), x_spacing - 0.2, y_spacing - 0.2, facecolor=hexval))
            r, g, b = mcolors.to_rgb(hexval)
            brightness = (r + g + b) / 3
            text_color = 'white' if brightness < 0.5 else 'black'
            ax.text(col_idx * x_spacing + (x_spacing - 0.2) / 2, y_pos + y_spacing / 2, color, ha='center',
                    va='center', fontsize=text_size, color=text_color)
        y_pos -= extra_space * y_spacing
    ax.set_xlim(-0.5, ncols * x_spacing)
    ax.set_ylim(-0.5, max_rows * y_spacing + 2)
    plt.tight_layout()
    plt.show()
def senspatankar(multilayer=None, medium=None, name=f"senspatantkar:{autoname(6)}", description='', t=None, autotime=True, timescale='sqrt', Cxprevious=None, ntimes=1e3, RelTol=1e-6, AbsTol=1e-6, container=None)

Simulates in 1D the mass transfer of a substance initially distributed in a multilayer packaging structure into a food medium (or liquid medium). This solver uses a finite-volume method adapted from Patankar to handle partition coefficients between all layers, and between the food and the contact layer.

Two typical configurations are implemented:

Configuration (PBC=False)
- Robin (third-kind) boundary condition on the left (in contact with food)
- Impervious boundary condition on the right (in contact with the surroundings)

Configuration (PBC=True)
- Periodic boundary condition between left and right to simulate infinite stacking or setoff

The configuration nofood is a variant of PBC=False with h=Bi=0 (impervious boundary condition on the left).

The behavior of the solver is decided by medium attributes (see food.py module). The property medium.PBC will determine whether periodic boundary conditions are used or not.

Parameters

multilayer : layer
A layer (or combined layers) object describing the packaging.
medium : foodlayer or foodphysics
A foodlayer object describing the food (or liquid) medium in contact.
name : str, optional
Simulation name, default = f"senspatantkar:{autoname(6)}" where autoname(6) is a random sequence of characters a-z A-Z 0-9
description : str, optional
Simulation description
t : float or array_like, optional
If a float is provided, it is taken as the total contact duration in seconds. If an array is provided, it is assumed to be time points where the solution will be evaluated. If None, it defaults to the contact time from the medium.
autotime : bool, optional
If True (default), an automatic time discretization is generated internally (linear or sqrt-based) between 0 and tmax (the maximum time). If False, the times in t are used directly.
timescale : {"sqrt", "linear"}, optional
Type of automatic time discretization if autotime=True. "sqrt" (default) refines the early times more (useful for capturing rapid changes). "linear" uses a regular spacing.
Cxprevious : Cprofile, optional (default=None)
Concentration profile (from a previous simulation).
ntimes : int, optional
Number of time points in the automatically generated time vector if autotime=True. The default is 1e3.
RelTol : float, optional
Relative tolerance for the ODE solver (solve_ivp). Default is 1e-6.
AbsTol : float, optional
Absolute tolerance for the ODE solver (solve_ivp). Default is 1e-6.

Raises

TypeError
If multilayer is not a layer instance or medium is not a foodlayer instance, or if timescale is not a string.
ValueError
If an invalid timescale is given (not one of {"sqrt", "linear"}).

Returns

SensPatankarResult
An object containing the time vector, concentration histories, fluxes, and spatial concentration profiles suitable for plotting and analysis.

Notes

  • The geometry is assumed 1D: Food is on the left boundary, with a mass transfer coefficient h = medium.h, partition ratio k0 = medium.k0, and the packaging layers are to the right up to an impervious boundary.
  • Results are normalized internally using a reference layer (iref) specified in multilayer. The reference layer is used to define dimensionless time (Fourier number Fo).
  • The dimensionless solution is solved by the Patankar approach with partition coefficients.
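
As a quick numerical illustration of the dimensionless time mentioned in the notes above (the numbers are hypothetical, chosen only to show the scaling):

    # Hypothetical reference layer: 100 µm thick, D = 1e-14 m²/s
    l_ref = 100e-6                     # m
    D_ref = 1e-14                      # m²/s
    timebase = l_ref**2 / D_ref        # characteristic diffusion time = 1e6 s (about 11.6 days)
    t = 10 * 24 * 3600                 # 10 days of contact, in s
    Fo = t / timebase                  # Fourier number = 0.864
    print(Fo)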

Example

    from patankar.food import ethanol
    from patankar.layer import layer

    medium = ethanol()
    A = layer(layername="layer A")
    B = layer(layername="layer B")
    multilayer = A + B

    sol = senspatankar(multilayer, medium, autotime=True, timescale="sqrt")
    sol.plotCF()
    sol.plotC()
Expand source code
def senspatankar(multilayer=None, medium=None,
                 name=f"senspatantkar:{autoname(6)}", description="",
                 t=None, autotime=True, timescale="sqrt", Cxprevious=None,
                 ntimes=1e3, RelTol=1e-6, AbsTol=1e-6,
                 container=None):
    """
    Simulates in 1D the mass transfer of a substance initially distributed in a multilayer
    packaging structure into a food medium (or liquid medium). This solver uses a finite-volume
    method adapted from Patankar to handle partition coefficients between all layers, and
    between the food and the contact layer.

    Two typical configurations are implemented:

    Configuration (PBC=False)
        - Robin (third-kind boundary condition) on the left (in contact with food)
        - Impervious boundary condition on the right (in contact with surrounding)

    Configuration (PBC=true)
        - periodic boundary condition between left and right to simulate infinite stacking or setoff

    The configuration nofood is a variant of PBC=False with h=Bi=0 (impervious boundary condition on the left).

    The behavior of the solver is decided by medium attributes (see food.py module).
    The property medium.PBC will determine whether periodic boundary conditions are used or not.


    Parameters
    ----------
    multilayer : layer
        A ``layer`` (or combined layers) object describing the packaging.
    medium : foodlayer or foodphysics
        A ``foodlayer`` object describing the food (or liquid) medium in contact.
    name : str, optional
        Simulation name, default = f"senspatantkar:{autoname(6)}" where autoname(6)
        is a random sequence of characters a-z A-Z 0-9
    description : str, optional
        Simulation description
    t : float or array_like, optional
        If a float is provided, it is taken as the total contact duration in seconds.
        If an array is provided, it is assumed to be time points where the solution
        will be evaluated. If None, it defaults to the contact time from the medium.
    autotime : bool, optional
        If True (default), an automatic time discretization is generated internally
        (linear or sqrt-based) between 0 and tmax (the maximum time). If False, the
        times in ``t`` are used directly.
    timescale : {"sqrt", "linear"}, optional
        Type of automatic time discretization if ``autotime=True``.
        "sqrt" (default) refines the early times more (useful for capturing rapid changes).
        "linear" uses a regular spacing.
    Cxprevious : Cprofile, optional (default=None)
        Concentration profile (from a previous simulation).
    ntimes : int, optional
        Number of time points in the automatically generated time vector if ``autotime=True``.
        The default is 1e3.
    RelTol : float, optional
        Relative tolerance for the ODE solver (``solve_ivp``). Default is 1e-6.
    AbsTol : float, optional
        Absolute tolerance for the ODE solver (``solve_ivp``). Default is 1e-6.

    Raises
    ------
    TypeError
        If ``multilayer`` is not a ``layer`` instance or ``medium`` is not a ``foodlayer`` instance,
        or if ``timescale`` is not a string.
    ValueError
        If an invalid ``timescale`` is given (not one of {"sqrt", "linear"}).

    Returns
    -------
    SensPatankarResult
        An object containing the time vector, concentration histories, fluxes, and
        spatial concentration profiles suitable for plotting and analysis.

    Notes
    -----
    - The geometry is assumed 1D: Food is on the left boundary, with a mass transfer coefficient
      `h = medium.h`, partition ratio `k0 = medium.k0`, and the packaging layers are to the right
      up to an impervious boundary.
    - Results are normalized internally using a reference layer (``iref``) specified in ``multilayer``.
      The reference layer is used to define dimensionless time (Fourier number Fo).
    - The dimensionless solution is solved by the Patankar approach with partition coefficients.

    Example
    -------
    .. code-block:: python

        from patankar.food import ethanol
        from patankar.layer import layer
        medium = ethanol()
        A = layer(layername="layer A")
        B = layer(layername="layer B")
        multilayer = A + B

        sol = senspatankar(multilayer, medium, autotime=True, timescale="sqrt")
        sol.plotCF()
        sol.plotC()
    """

    # Check arguments
    if not isinstance(multilayer, layer):
        raise TypeError(f"the input multilayer must be of class layer, not {type(multilayer).__name__}")
    if not isinstance(medium, (foodlayer,foodphysics)):
        raise TypeError(f"the input medium must be of class foodlayer, not {type(medium).__name__}")
    if not isinstance(timescale, str):
        raise TypeError(f"timescale must be a string, not {type(timescale).__name__}")

    # Refresh the physics of medium for parameters tunned by the end-user
    medium.refresh()

    # extract the PBC flag (True for setoff)
    PBC = medium.PBC

    # Restart file initialization (all parameters are saved - and cannot be changed)
    restart = restartfile_senspantakar(multilayer, medium, name,
            description, t, autotime, timescale, Cxprevious, ntimes, RelTol, AbsTol,deepcopy=True)
    # Restart file (unsecure version without deepcoy)
    restart_unsecure = restartfile_senspantakar(multilayer, medium, name,
            description, t, autotime, timescale, Cxprevious, ntimes, RelTol, AbsTol,deepcopy=False)

    # Contact medium properties
    CF0 = medium.get_param("CF0",0) # instead of medium.CF0 to get a fallback mechanism with nofood and setoff
    k0 = medium.get_param("k0",1)
    h = medium.get_param("h",0,acceptNone=False) # None will arise for PBC
    ttarget = medium.get_param("contacttime") # <-- ttarget is the time requested
    tmax = 2 * ttarget  # ensures at least up to 2*contacttime

    # Material properties
    k = multilayer.k / k0   # all k are normalized
    k0 = k0 / k0            # all k are normalized
    D = multilayer.D
    l = multilayer.l
    C0 = multilayer.C0

    # Validate/prepare time array
    if isinstance(t,tuple):
        t = check_units(t)[0]
    t = np.array(tmax if t is None else t, dtype=float) # <-- simulation time (longer than ttarget)
    if np.isscalar(t) or t.size == 1:
        t = np.array([0, t.item()],dtype=float)
    if t[0] != 0:
        t = np.insert(t, 0, 0)  # Ensure time starts at zero
    # Ensure t[-1] is greater than ttarget
    if t[-1] < ttarget.item():  # Convert ttarget to scalar before comparison
        t = np.append(t, [ttarget, 1.05*ttarget, 1.1*ttarget, 1.2*ttarget])  # Extend time array to cover requested time

    # Reference layer for dimensionless transformations
    iref = multilayer.referencelayer
    l_ref = l[iref]
    D_ref = D[iref]

    # Normalize lengths and diffusivities
    l_normalized = l / l_ref
    D_normalized = D / D_ref

    # Dimensionless time (Fourier number)
    timebase = l_ref**2 / D_ref
    Fo = t / timebase

    # Automatic time discretization if requested
    if autotime:
        if timescale.lower() == "linear":
            Fo_int = np.linspace(np.min(Fo), np.max(Fo), int(ntimes))
        elif timescale.lower() == "sqrt":
            Fo_int = np.linspace(np.sqrt(np.min(Fo)), np.sqrt(np.max(Fo)), int(ntimes))**2
        else:
            raise ValueError('timescale can be "sqrt" or "linear"')
        t = Fo_int * timebase
    else:
        Fo_int = Fo

    # L: dimensionless ratio of packaging to food volumes (scaled by reference layer thickness)
    A = medium.get_param("surfacearea",0)
    l_sum = multilayer.thickness
    VP = A * l_sum
    VF = medium.get_param("volume",1)
    LPF = VP / VF
    L = LPF * l_ref / l_sum

    # Bi: dimensionless mass transfer coefficient
    Bi = h * l_ref / D_ref

    # Compute equilibrium concentration factor
    sum_lL_C0 = np.sum(l_normalized * L * C0)
    sum_terms = np.sum((1 / k) * l_normalized * L)
    C0eq = (CF0 + sum_lL_C0) / (1 + sum_terms)
    if C0eq == 0:
        C0eq = 1.0

    # Normalize initial concentrations
    C0_normalized = C0 / C0eq
    CF0_normalized = CF0 / C0eq

    # Generate mesh (add offset x0 and concatenate them)
    meshes = multilayer.mesh()
    x0 = 0
    for i,mesh in enumerate((meshes)):
        mesh.xmesh += x0
        x0 += mesh.l
    xmesh = np.concatenate([m.xmesh for m in meshes])
    total_nodes = len(xmesh)

    # Positions of the interfaces (East and West)
    dw = np.concatenate([m.dw for m in meshes])
    de = np.concatenate([m.de for m in meshes])

    # Attach properties to nodes (flat interpolant)
    D_mesh = np.concatenate([D_normalized[m.index] for m in meshes])
    k_mesh = np.concatenate([k[m.index] for m in meshes])
    C0_mesh = np.concatenate([C0_normalized[m.index] for m in meshes])

    # Interpolate the initial solution if Cxprevious is supplied
    if Cxprevious is not None:
        if not isinstance(Cxprevious,Cprofile):
            raise TypeError(f"Cxprevisous should be a Cprofile object not a {type(Cxprevious).__name__}")
        C0_mesh = Cxprevious.interp(xmesh*l_ref) / C0eq # dimensionless

    # Conductances between the node and the next interface
    # item() is forced to avoid the (1,) Shape Issue (since NumPy 1.25)
    hw = np.zeros(total_nodes)
    he = np.zeros(total_nodes)
    if PBC:
        for i in range(total_nodes):
            prev = total_nodes-1 if i==0 else i-1
            hw[i] = (1 / ((de[prev] / D_mesh[prev] * k_mesh[prev] / k_mesh[i]) + dw[i] / D_mesh[i])).item()
    else:
        hw[0] = (1 / ((1 / k_mesh[0]) / Bi + dw[0] / D_mesh[0])).item()
        for i in range(1, total_nodes):
            hw[i] = (1 / ((de[i - 1] / D_mesh[i - 1] * k_mesh[i - 1] / k_mesh[i]) + dw[i] / D_mesh[i])).item()
    he[:-1] = hw[1:] # nodes are the center of FV elements: he = np.roll(hw, -1)
    he[-1]=hw[0] if PBC else 0.0 # we connect (PBC) or we enforce impervious (note that he was initialized to 0 already)

    if PBC: # periodic boundary condition

        # Assemble sparse matrix using COO format for efficient construction
        rows = np.zeros(3 * total_nodes, dtype=int) # row indices
        cols = np.zeros_like(rows) # col indices
        data = np.zeros_like(rows, dtype=np.float64) # values
        idx = 0
        for i in range(total_nodes):
            current = i
            west = (i-1) % total_nodes
            east = (i+1) % total_nodes
            denominator = dw[current] + de[current]
            k_current = k_mesh[current]
            k_west = k_mesh[west]
            k_east = k_mesh[east]
            # West neighbor
            rows[idx] = current
            cols[idx] = west
            data[idx] = hw[current] * k_west / k_current / denominator
            idx +=1
            # Diagonal
            rows[idx] = current
            cols[idx] = current
            data[idx] = (-hw[current] - he[current] * k_current/k_east) / denominator
            idx +=1
            # East neighbor
            rows[idx] = current
            cols[idx] = east
            data[idx] = he[current] / denominator
            idx +=1
        A = coo_matrix((data[:idx], (rows[:idx], cols[:idx])),
                     shape=(total_nodes, total_nodes)).tocsr()
        C_initial =  C0_mesh

    else: # Robin (left) + impervious (right) --> triband matrix

        # Assemble the tri-band matrix A as sparse for efficiency
        size = total_nodes + 1  # +1 for the food node
        main_diag = np.zeros(size)
        upper_diag = np.zeros(size - 1)
        lower_diag = np.zeros(size - 1)
        # Food node (index 0)
        main_diag[0] = (-L * hw[0] * (1 / k_mesh[0])).item()
        upper_diag[0] = (L * hw[0]).item()
        # Layer nodes
        for i in range(total_nodes):
            denom = dw[i] + de[i]
            if i == 0:
                main_diag[1] = (-hw[0] - he[0] * k_mesh[0] / k_mesh[1]) / denom
                upper_diag[1] = he[0] / denom
                lower_diag[0] = (hw[0] * (1 / k_mesh[0])) / denom
            elif i == total_nodes - 1:
                main_diag[i + 1] = (-hw[i]) / denom
                lower_diag[i] = (hw[i] * k_mesh[i - 1] / k_mesh[i]) / denom
            else:
                main_diag[i + 1] = (-hw[i] - he[i] * k_mesh[i] / k_mesh[i + 1]) / denom
                upper_diag[i + 1] = he[i] / denom
                lower_diag[i] = (hw[i] * k_mesh[i - 1] / k_mesh[i]) / denom
        A = diags([main_diag, upper_diag, lower_diag], [0, 1, -1], shape=(size, size), format='csr')
        C_initial = np.concatenate([CF0_normalized, C0_mesh])

    # ODE system: dC/dFo = A * C
    def odesys(_, C):
        return A.dot(C)

    sol = solve_ivp(   # <-- generic solver
        odesys,        # <-- our system (efficient sparse matrices)
        [Fo_int[0], Fo_int[-1]], # <-- integration range on Fourier scale
        C_initial,     # <-- initial solution
        t_eval=Fo_int, # <-- the solution is retrieved at these Fo values
        method='BDF',  # <-- backward differences are absolutely stable
        rtol=RelTol,   # <-- relative and absolute tolerances
        atol=AbsTol
    )

    # Check solution
    if not sol.success:
        print("Solver failed:", sol.message)

    # Extract solution
    if PBC:
        CF_dimless = np.full((sol.y.shape[1],), CF0 / C0eq)
        C_dimless = sol.y
        f = np.zeros_like(CF_dimless)
    else:
        CF_dimless = sol.y[0, :]
        C_dimless = sol.y[1:, :]
        # Robin flux
        f = hw[0] * (k0 * CF_dimless - C_dimless[0, :]) * C0eq

    # Compute cumulative flux
    fc = cumulative_trapezoid(f, t, initial=0)

    if PBC:
        # Build full (dimensionless) profile for plotting across each sub-node
        xfull, Cfull_dimless = compute_fc_profile_PBC(C_dimless, Fo_int, de, dw, he, hw, k_mesh, D_mesh, xmesh, xreltol=0)
        # Build full (dimensionless) profile for interpolation across each sub-node
        xfulli, Cfull_dimlessi = compute_fc_profile_PBC(C_dimless, Fo_int, de, dw, he, hw, k_mesh, D_mesh, xmesh, xreltol=1e-4)
    else:
        # Build full (dimensionless) profile for plotting across each sub-node
        xfull, Cfull_dimless = compute_fv_profile(xmesh, dw, de,C_dimless, k_mesh, D_mesh, hw, he, CF_dimless, k0, Fo_int, xreltol=0)
        # Build full (dimensionless) profile for interpolation across each sub-node
        xfulli, Cfull_dimlessi = compute_fv_profile(xmesh, dw, de,C_dimless, k_mesh, D_mesh, hw, he, CF_dimless, k0, Fo_int, xreltol=1e-4)


    # revert to dimensional concentrations
    CF = CF_dimless * C0eq
    Cx = Cfull_dimless * C0eq

    return SensPatankarResult(
        name=name,
        description=description,
        ttarget = ttarget,             # target time
        t=t,     # time where concentrations are calculated
        C= np.trapz(Cfull_dimless, xfull, axis=1)*C0eq,
        CF=CF,
        fc=fc,
        f=f,
        x=xfull * l_ref,           # revert to dimensional lengths
        Cx=Cx,
        tC=sol.t,
        C0eq=C0eq,
        timebase=timebase,
        restart=restart, # <--- restart info (inputs only)
        restart_unsecure=restart_unsecure,
        xi=xfulli*l_ref, # for restart only
        Cxi=Cfull_dimlessi*C0eq, # for restart only
        createcontainer = True,
        container=container
    )
def tooclear(color, threshold=0.6, correction=0.15)

Darkens a too-bright RGB(A) color tuple.

Parameters:

color : tuple (3 or 4 elements)
RGB or RGBA color in [0,1] range.
threshold : float, optional (default=0.6)
Grayscale threshold above which colors are considered too bright.
correction : float, optional (default=0.15)
Amount by which to darken too bright colors.

Returns:

tuple Adjusted RGB(A) color tuple with too bright colors darkened.

Example:

corrected_color = tooclear((0.9, 0.9, 0.7, 1.0))

Expand source code
def tooclear(color, threshold=0.6, correction=0.15):
    """
    Darkens a too-bright RGB(A) color tuple.

    Parameters:
    -----------
    color : tuple (3 or 4 elements)
        RGB or RGBA color in [0,1] range.
    threshold : float, optional (default=0.6)
        Grayscale threshold above which colors are considered too bright.
    correction : float, optional (default=0.15)
        Amount by which to darken too bright colors.

    Returns:
    --------
    tuple
        Adjusted RGB(A) color tuple with too bright colors darkened.

    Example:
    --------
    corrected_color = tooclear((0.9, 0.9, 0.7, 1.0))
    """
    if not isinstance(color, tuple) or len(color) not in [3, 4]:
        raise ValueError("Input must be an RGB or RGBA tuple.")
    rgb = color[:3]  # Extract RGB values
    # Compute grayscale brightness (mean of RGB channels)
    brightness = sum(rgb) / 3
    # Darken if brightness exceeds the threshold
    if brightness > threshold:
        rgb = tuple(max(0, c - correction) for c in rgb)
    return rgb + (color[3],) if len(color) == 4 else rgb  # Preserve alpha if present

Classes

class CFSimulationContainer (name='', description='')

Container to store and compare multiple CF results from different simulations.

Attributes

curves : dict
Stores CF results with unique keys. Each entry contains:
- 'label': Label used for legend.
- 'tmin', 'tmax': Time range of the simulation.
- 'interpolant': Interpolated CF function (if continuous).
- 'times': Discrete time points (if discrete).
- 'values': Discrete CF values (if discrete).
- 'color': Assigned color for plotting.
- 'linestyle': Line style (default is '-').
- 'linewidth': Line width (default is 2).
- 'marker': Marker style for discrete data.
- 'markerfacecolor': Marker face color.
- 'markersize': Marker size.
- 'discrete': Boolean indicating discrete data.

Initialize an empty container for CF results.
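
A minimal usage sketch (assuming imports from patankar.migration; the layer/medium setup mirrors the senspatankar example above, and labels and styling are illustrative):

    from patankar.food import ethanol
    from patankar.layer import layer
    from patankar.migration import senspatankar, CFSimulationContainer

    medium = ethanol()
    A = layer(layername="layer A")
    B = layer(layername="layer B")

    solA  = senspatankar(A, medium)          # monolayer
    solAB = senspatankar(A + B, medium)      # bilayer

    cmp = CFSimulationContainer(name="AvsAB", description="monolayer vs bilayer")
    cmp.add(solA, label="A only", color="tomato")
    cmp.add(solAB, label="A + B", linestyle="--")
    cmp.plotCF()                             # overlay the two CF(t) curves
    cmp.save_as_csv("CF_comparison.csv", overwrite=True)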

Expand source code
class CFSimulationContainer:
    """
    Container to store and compare multiple CF results from different simulations.

    Attributes
    ----------
    curves : dict
        Stores CF results with unique keys. Each entry contains:
        - 'label': Label used for legend.
        - 'tmin', 'tmax': Time range of the simulation.
        - 'interpolant': Interpolated CF function (if continuous).
        - 'times': Discrete time points (if discrete).
        - 'values': Discrete CF values (if discrete).
        - 'color': Assigned color for plotting.
        - 'linestyle': Line style (default is '-').
        - 'linewidth': Line width (default is 2).
        - 'marker': Marker style for discrete data.
        - 'markerfacecolor': Marker face color.
        - 'markersize': Marker size.
        - 'discrete': Boolean indicating discrete data.
    """

    def __init__(self,name="",description=""):
        """Initialize an empty container for CF results."""
        self.curves = {}
        self._name = name
        self._description = description
        self._plotconfig = plotconfig

    @property
    def name(self):
        return self._name or autoname(6)

    @property
    def description(self):
        return self._description or f"comparison of {len(self.curves)} curves"


    def add(self, simulation_result, label=None, color=None, linestyle="-", linewidth=2,
            marker='o', markerfacecolor='auto', markeredgecolor='black', markersize=6, discrete=False):
        """
        Add a CF result to the container.

        Parameters
        ----------
        simulation_result : SensPatankarResult
            The simulation result.
        discrete : bool, optional
            Whether the data is discrete.
        """
        if not isinstance(simulation_result, SensPatankarResult):
            raise TypeError(f"Expected SensPatankarResult, got {type(simulation_result).__name__}")
        label = label or f"plot{len(self.curves) + 1}"
        key = label[:80]
        if color is None:
            cmap = cm.get_cmap("tab10", len(self.curves) + 1)
            color = cmap(len(self.curves) % 10)
        if markerfacecolor == 'auto':
            markerfacecolor = color
        self.curves[key] = {
            "label": label,
            "color": color,
            "linestyle": linestyle,
            "linewidth": linewidth,
            "marker": marker,
            "markerfacecolor": markerfacecolor,
            "markeredgecolor": markeredgecolor,
            "markersize": markersize,
            "discrete": discrete
        }
        if discrete:
            self.curves[key].update({
                "times": simulation_result.t,
                "values": simulation_result.CF
            })
        else:
            self.curves[key].update({
                "tmin": simulation_result.t.min(),
                "tmax": simulation_result.t.max(),
                "interpolant": simulation_result.interp_CF
            })

    def delete(self, identifier):
        """
        Remove a stored curve by its index (int) or label (str).

        Parameters
        ----------
        identifier : int or str
            - If `int`, removes the curve at the specified index.
            - If `str`, removes the curve with the matching label.
        """
        if isinstance(identifier, int):
            key = self._get_keys_by_indices(identifier)[0]  # resolve the index to its stored key
        elif isinstance(identifier, str):
            key = identifier[:80]  # keys are labels truncated to 80 characters (see add)
            if key not in self.curves:
                print(f"No curve found with label '{identifier}'")
                return
        else:
            raise TypeError("Identifier must be an integer (index) or a string (label).")
        del self.curves[key]  # actually remove the stored curve

    def __repr__(self):
        """Return a summary of stored CF curves including index numbers."""
        if not self.curves:
            return "<CFSimulationContainer: No stored curves>"
        repr_str = "<CFSimulationContainer: Stored CF Curves>\n"
        repr_str += "--------------------------------------------------\n"
        for index, (key, data) in enumerate(self.curves.items()):
            repr_str += (f"[{index}] Label: {data['label']} | "
                         f"Time: [{data['tmin']:.2e}, {data['tmax']:.2e}] s | "
                         f"Color: {data['color']} | "
                         f"Style: {data['linestyle']} | "
                         f"Width: {data['linewidth']}\n")
        return repr_str

    def _validate_indices(self, indices):
        """Helper function to check if indices are valid."""
        if isinstance(indices, int):
            indices = [indices]
        if not all(isinstance(i, int) and 0 <= i < len(self.curves) for i in indices):
            raise IndexError(f"Invalid index in {indices}. Must be between 0 and {len(self.curves) - 1}.")
        return indices

    def _get_keys_by_indices(self, indices):
        """Helper function to retrieve keys based on indices."""
        if isinstance(indices, (int, str)):
            indices = [indices]
        keys = []
        all_keys = list(self.curves.keys())
        for idx in indices:
            if isinstance(idx, int):
                if idx < 0:
                    idx += len(all_keys)
                if idx < 0 or idx >= len(all_keys):
                    raise IndexError(f"Index {idx} is out of range for curves.")
                keys.append(all_keys[idx])
            elif isinstance(idx, str):
                if idx not in self.curves:
                    raise KeyError(f"Key '{idx}' does not exist in curves.")
                keys.append(idx)
            else:
                raise TypeError("Index must be an int, str, or a list of both.")
        return keys

    def update(self, index, label=None, linestyle=None, linewidth=None, color=None,
               marker=None, markersize=None, markerfacecolor=None, markeredgecolor=None):
        """
        Update properties of one or multiple curves.

        Parameters
        ----------
        index : int or list of int
            Index or indices of the curve(s) to update.
        label : str, optional
            New label for the curve(s).
        linestyle : str, optional
            New linestyle for the curve(s).
        linewidth : float, optional
            New linewidth for the curve(s).
        color : str or tuple, optional
            New color for the curve(s).
        marker : str, optional
            New marker style for discrete data.
        markersize : float, optional
            New marker size for discrete data.
        markerfacecolor : str or tuple, optional
            New marker face color.
        markeredgecolor : str or tuple, optional
            New marker edge color.
        """
        keys = self._get_keys_by_indices(index)

        for key in keys:
            if label is not None:
                self.curves[key]["label"] = label
            if linestyle is not None:
                self.curves[key]["linestyle"] = linestyle
            if linewidth is not None:
                self.curves[key]["linewidth"] = linewidth
            if color is not None:
                self.curves[key]["color"] = color
            if marker is not None:
                self.curves[key]["marker"] = marker
            if markersize is not None:
                self.curves[key]["markersize"] = markersize
            if markerfacecolor is not None:
                self.curves[key]["markerfacecolor"] = markerfacecolor
            if markeredgecolor is not None:
                self.curves[key]["markeredgecolor"] = markeredgecolor


    def label(self, index, new_label):
        """Change the label of one or multiple curves."""
        self.update(index, label=new_label)

    def linewidth(self, index, new_value):
        """Change the linewidth of one or multiple curves."""
        self.update(index, linewidth=new_value)

    def linestyle(self, index, new_style):
        """Change the linestyle of one or multiple curves."""
        self.update(index, linestyle=new_style)

    def color(self, index, new_color):
        """Change the color of one or multiple curves."""
        self.update(index, color=new_color)

    def marker(self, index, new_marker):
        """Change the marker style of one or multiple curves."""
        self.update(index, marker=new_marker)

    def markersize(self, index, new_size):
        """Change the marker size of one or multiple curves."""
        self.update(index, markersize=new_size)

    def markerfacecolor(self, index, new_facecolor):
        """Change the marker face color of one or multiple curves."""
        self.update(index, markerfacecolor=new_facecolor)

    def markeredgecolor(self, index, new_edgecolor):
        """Change the marker edge color of one or multiple curves."""
        self.update(index, markeredgecolor=new_edgecolor)

    def colormap(self, name="viridis", ncolors=16, tooclearflag=True, reverse=False):
        """
        Generates a list of `ncolors` colors from the specified colormap.

        Parameters:
        -----------
        name : str, optional (default="viridis")
            Name of the Matplotlib colormap to use.
        ncolors : int, optional (default=16)
            Number of colors to generate.
        tooclearflag : bool, optional (default=True)
            If True, applies `tooclear` function to adjust brightness.
        reverse : bool, optional (default=False)
            If True, reverses the colormap.

        Returns:
        --------
        list of tuples
            List of RGB(A) colors in [0,1] range.

        Raises:
        -------
        ValueError
            If the colormap name is not recognized.
        """
        return colormap(name, ncolors, tooclearflag, reverse)  # pass the flag, not the module-level tooclear function

    def viridis(self, ncolors=16, tooclear=True, reverse=False):
        """Generates colors from the Viridis colormap."""
        return colormap("viridis", ncolors, tooclear, reverse)

    def jet(self, ncolors=16, tooclear=True, reverse=False):
        """Generates colors from the Jet colormap."""
        return colormap("jet", ncolors, tooclear, reverse)


    def plotCF(self, t_range=None):
        """
        Plot all stored CF curves in a single figure.

        Parameters
        ----------
        t_range : tuple (t_min, t_max), optional
            Time range for plotting. If None, uses each curve's own range.
        plotconfig : dict, optional
            Dictionary with plotting configuration, containing:
            - "tunit": Time unit label (e.g., 's').
            - "Cunit": Concentration unit label (e.g., 'mg/L').
            - "tscale": Time scaling factor.
            - "Cscale": Concentration scaling factor.
        """
        plt.rc('text', usetex=True) # Enable LaTeX formatting for Matplotlib
        # extract plotconfig
        plotconfig = self._plotconfig

        if not self.curves:
            print("No curves to plot.")
            return

        fig, ax = plt.subplots(figsize=(8, 6))

        for data in self.curves.values():
            if data["discrete"]:
                # Discrete data plotting
                ax.scatter(data["times"], data["values"], label=data["label"],
                           color=data["color"], marker=data["marker"],
                           facecolor=data["markerfacecolor"], edgecolor=data["markeredgecolor"],
                           s=data["markersize"]**2)
            else:
                # Continuous data plotting
                t_min, t_max = data["tmin"], data["tmax"]
                if t_range:
                    t_min, t_max = max(t_min, t_range[0]), min(t_max, t_range[1])

                t_plot = np.linspace(t_min, t_max, 500)
                CF_plot = data["interpolant"](t_plot)
                ax.plot(t_plot, CF_plot, label=data["label"],
                        color=data["color"], linestyle=data["linestyle"], linewidth=data["linewidth"])

        # Configure the plot
        ax.set_xlabel(f'Time [{plotconfig["tunit"]}]' if plotconfig else "Time")
        ax.set_ylabel(f'Concentration in Food [{plotconfig["Cunit"]}]' if plotconfig else "CF")
        title_main = "Concentration in Food vs. Time"
        title_sub = rf"$\bf{{{self.name}}}$" + (f": {self.description}" if self.description else "")
        ax.set_title(f"{title_main}\n{title_sub}", fontsize=10)
        ax.text(0.5, 1.05, title_sub, fontsize=8, ha="center", va="bottom", transform=ax.transAxes)
        ax.set_title(title_main)
        ax.legend()
        ax.grid(True)
        plt.show()
        # store metadata
        setattr(fig,_fig_metadata_atrr_,f"cmp_pltCF_{self.name}")
        return fig


    def to_dataframe(self, t_range=None, num_points=1000, time_list=None):
        """
        Export interpolated CF data as a pandas DataFrame.
        Parameters:
        - t_range: tuple (t_min, t_max), optional
            The time range for interpolation (default: min & max of all stored results).
        - num_points: int, optional
            Number of points in the interpolated time grid (default: 1000).
        - time_list: list or array, optional
            Explicit list of time points for interpolation (overrides t_range & num_points).
        Returns:
        - pd.DataFrame
            A DataFrame with time as index and CF values as columns (one per simulation).
        """
        if not self.curves:
            print("No data to export.")
            return pd.DataFrame()

        # Determine the time grid
        if time_list is not None:
            t_grid = np.array(time_list)
        else:
            all_t_min = min(data["tmin"] for data in self.curves.values())
            all_t_max = max(data["tmax"] for data in self.curves.values())
            # Default time range
            t_min, t_max = t_range if t_range else (all_t_min, all_t_max)
            # Create evenly spaced time grid
            t_grid = np.linspace(t_min, t_max, num_points)
        # Create DataFrame with time as index
        df = pd.DataFrame({"Time (s)": t_grid})

        # Interpolate each stored CF curve at the common time grid
        for key, data in self.curves.items():
            df[data["label"]] = data["interpolant"](t_grid)
        return df


    def save_as_excel(self, filename="CF_data.xlsx", destinationfolder=os.getcwd(), overwrite=False,
                      t_range=None, num_points=1000, time_list=None):
        """
        Save stored CF data to an Excel file.
        Parameters:
        - filename: str, Excel filename.
        - destinationfolder: str, where to save the file.
        - overwrite: bool, overwrite existing file.
        - t_range: tuple (t_min, t_max), optional
            The time range for interpolation (default: min & max of all stored results).
        - num_points: int, optional
            Number of points in the interpolated time grid (default: 1000).
        - time_list: list or array, optional
            Explicit list of time points for interpolation (overrides t_range & num_points).
        """
        if not self.curves:
            print("No data to export.")
            return
        df = self.to_dataframe(t_range=t_range, num_points=num_points, time_list=time_list)
        filepath = os.path.join(destinationfolder, filename)
        if not overwrite and os.path.exists(filepath):
            print(f"File {filepath} already exists. Use overwrite=True to replace it.")
            return

        df.to_excel(filepath, index=False)
        print(f"Saved Excel file: {filepath}")


    def save_as_csv(self, filename="CF_data.csv", destinationfolder=os.getcwd(), overwrite=False,
                    t_range=None, num_points=200, time_list=None):
        """
        Save stored CF data to a CSV file.
        Parameters:
        - filename: str, CSV filename.
        - destinationfolder: str, where to save the file.
        - overwrite: bool, overwrite existing file.
        - t_range: tuple (t_min, t_max), optional
            The time range for interpolation (default: min & max of all stored results).
        - num_points: int, optional
            Number of points in the interpolated time grid (default: 200).
        - time_list: list or array, optional
            Explicit list of time points for interpolation (overrides t_range & num_points).
        """
        if not self.curves:
            print("No data to export.")
            return
        df = self.to_dataframe(t_range=t_range, num_points=num_points, time_list=time_list)
        filepath = os.path.join(destinationfolder, filename)
        if not overwrite and os.path.exists(filepath):
            print(f"File {filepath} already exists. Use overwrite=True to replace it.")
            return
        df.to_csv(filepath, index=False)
        print(f"Saved CSV file: {filepath}")


    def rgb(self):
        """Displays a categorized color chart with properly aligned headers."""
        plt.rc('text', usetex=False) # Disable LaTeX formatting for Matplotlib
        rgb()
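
Example: a minimal sketch of how this container is typically used through the comparison attribute of a SensPatankarResult (sol1 and sol2 below are assumed to be two existing simulation results; labels and colors are illustrative):

    # sol1, sol2: SensPatankarResult objects returned by senspatankar()
    cmp = sol1.comparison                          # container created with the first result
    cmp.add(sol2, label="B|A", color="SteelBlue")  # overlay a second CF curve
    cmp.plotCF()                                   # plot both curves
    cmp.save_as_csv("stacking_order.csv", overwrite=True)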

Instance variables

var description
Expand source code
@property
def description(self):
    return self._description or f"comparison of {len(self.curves)} curves"
var name
Expand source code
@property
def name(self):
    return self._name or autoname(6)

Methods

def add(self, simulation_result, label=None, color=None, linestyle='-', linewidth=2, marker='o', markerfacecolor='auto', markeredgecolor='black', markersize=6, discrete=False)

Add a CF result to the container.

Parameters

simulation_result : SensPatankarResult
The simulation result.
discrete : bool, optional
Whether the data is discrete.
Expand source code
def add(self, simulation_result, label=None, color=None, linestyle="-", linewidth=2,
        marker='o', markerfacecolor='auto', markeredgecolor='black', markersize=6, discrete=False):
    """
    Add a CF result to the container.

    Parameters
    ----------
    simulation_result : SensPatankarResult
        The simulation result.
    discrete : bool, optional
        Whether the data is discrete.
    """
    if not isinstance(simulation_result, SensPatankarResult):
        raise TypeError(f"Expected SensPatankarResult, got {type(simulation_result).__name__}")
    label = label or f"plot{len(self.curves) + 1}"
    key = label[:80]
    if color is None:
        cmap = cm.get_cmap("tab10", len(self.curves) + 1)
        color = cmap(len(self.curves) % 10)
    if markerfacecolor == 'auto':
        markerfacecolor = color
    self.curves[key] = {
        "label": label,
        "color": color,
        "linestyle": linestyle,
        "linewidth": linewidth,
        "marker": marker,
        "markerfacecolor": markerfacecolor,
        "markeredgecolor": markeredgecolor,
        "markersize": markersize,
        "discrete": discrete
    }
    if discrete:
        self.curves[key].update({
            "times": simulation_result.t,
            "values": simulation_result.CF
        })
    else:
        self.curves[key].update({
            "tmin": simulation_result.t.min(),
            "tmax": simulation_result.t.max(),
            "interpolant": simulation_result.interp_CF
        })
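
Example: a sketch showing how discrete (pseudo-experimental) data can be stored next to continuous curves; sol is assumed to be a SensPatankarResult and cmp its comparison container:

    exp = sol.pseudoexperiment(npoints=20, std_relative=0.1, seed=0)  # noisy discrete data
    cmp.add(exp, label="pseudo-experiment", color="black", marker="s", discrete=True)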
def color(self, index, new_color)

Change the color of one or multiple curves.

Expand source code
def color(self, index, new_color):
    """Change the color of one or multiple curves."""
    self.update(index, color=new_color)
def colormap(self, name='viridis', ncolors=16, tooclearflag=True, reverse=False)

Generates a list of ncolors colors from the specified colormap.

Parameters:

name : str, optional (default="viridis")
Name of the Matplotlib colormap to use.
ncolors : int, optional (default=16)
Number of colors to generate.
tooclearflag : bool, optional (default=True)
If True, applies the tooclear() function to adjust brightness.
reverse : bool, optional (default=False)
If True, reverses the colormap.

Returns:

list of tuples
List of RGB(A) colors in [0,1] range.

Raises:

ValueError
If the colormap name is not recognized.

Expand source code
def colormap(self, name="viridis", ncolors=16, tooclearflag=True, reverse=False):
    """
    Generates a list of `ncolors` colors from the specified colormap.

    Parameters:
    -----------
    name : str, optional (default="viridis")
        Name of the Matplotlib colormap to use.
    ncolors : int, optional (default=16)
        Number of colors to generate.
    tooclearflag : bool, optional (default=True)
        If True, applies `tooclear` function to adjust brightness.
    reverse : bool, optional (default=False)
        If True, reverses the colormap.

    Returns:
    --------
    list of tuples
        List of RGB(A) colors in [0,1] range.

    Raises:
    -------
    ValueError
        If the colormap name is not recognized.
    """
    return colormap(name, ncolors, tooclearflag, reverse)
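
Example: a sketch that recolors every stored curve with a generated palette (cmp is assumed to hold at least one curve):

    cols = cmp.viridis(ncolors=len(cmp.curves))  # one color per stored curve
    for i, c in enumerate(cols):
        cmp.color(i, c)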
def delete(self, identifier)

Remove a stored curve by its index (int) or label (str).

Parameters

identifier : int or str
  • If int, removes the curve at the specified index.
  • If str, removes the curve with the matching label.
Expand source code
def delete(self, identifier):
    """
    Remove a stored curve by its index (int) or label (str).

    Parameters
    ----------
    identifier : int or str
        - If `int`, removes the curve at the specified index.
        - If `str`, removes the curve with the matching label.
    """
    if isinstance(identifier, int):
        key = self._get_key_by_index(identifier)
    elif isinstance(identifier, str):
        key = identifier[:80]  # Match the label-based key (labels are truncated to 80 characters in add)
        if key not in self.curves:
            print(f"No curve found with label '{identifier}'")
            return
    else:
        raise TypeError("Identifier must be an integer (index) or a string (label).")
    del self.curves[key]
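
Example: curves can be removed either by position or by label (the label below is illustrative):

    cmp.delete(0)        # remove the first stored curve
    cmp.delete("B|A")    # remove a curve by its label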
def jet(self, ncolors=16, tooclear=True, reverse=False)

Generates colors from the Jet colormap.

Expand source code
def jet(self, ncolors=16, tooclear=True, reverse=False):
    """Generates colors from the Jet colormap."""
    return colormap("jet", ncolors, tooclear, reverse)
def label(self, index, new_label)

Change the label of one or multiple curves.

Expand source code
def label(self, index, new_label):
    """Change the label of one or multiple curves."""
    self.update(index, label=new_label)
def linestyle(self, index, new_style)

Change the linestyle of one or multiple curves.

Expand source code
def linestyle(self, index, new_style):
    """Change the linestyle of one or multiple curves."""
    self.update(index, linestyle=new_style)
def linewidth(self, index, new_value)

Change the linewidth of one or multiple curves.

Expand source code
def linewidth(self, index, new_value):
    """Change the linewidth of one or multiple curves."""
    self.update(index, linewidth=new_value)
def marker(self, index, new_marker)

Change the marker style of one or multiple curves.

Expand source code
def marker(self, index, new_marker):
    """Change the marker style of one or multiple curves."""
    self.update(index, marker=new_marker)
def markeredgecolor(self, index, new_edgecolor)

Change the marker edge color of one or multiple curves.

Expand source code
def markeredgecolor(self, index, new_edgecolor):
    """Change the marker edge color of one or multiple curves."""
    self.update(index, markeredgecolor=new_edgecolor)
def markerfacecolor(self, index, new_facecolor)

Change the marker face color of one or multiple curves.

Expand source code
def markerfacecolor(self, index, new_facecolor):
    """Change the marker face color of one or multiple curves."""
    self.update(index, markerfacecolor=new_facecolor)
def markersize(self, index, new_size)

Change the marker size of one or multiple curves.

Expand source code
def markersize(self, index, new_size):
    """Change the marker size of one or multiple curves."""
    self.update(index, markersize=new_size)
def plotCF(self, t_range=None)

Plot all stored CF curves in a single figure.

Parameters

t_range : tuple (t_min, t_max), optional
Time range for plotting. If None, uses each curve's own range.
Notes

The plot appearance is read from the container's internal plot configuration (self._plotconfig), which provides: "tunit" (time unit label, e.g. 's'), "Cunit" (concentration unit label, e.g. 'mg/L'), "tscale" (time scaling factor) and "Cscale" (concentration scaling factor).
Expand source code
def plotCF(self, t_range=None):
    """
    Plot all stored CF curves in a single figure.

    Parameters
    ----------
    t_range : tuple (t_min, t_max), optional
        Time range for plotting. If None, uses each curve's own range.

    Notes
    -----
    The plot appearance is read from the container's plot configuration
    (self._plotconfig), which provides:
    - "tunit": Time unit label (e.g., 's').
    - "Cunit": Concentration unit label (e.g., 'mg/L').
    - "tscale": Time scaling factor.
    - "Cscale": Concentration scaling factor.
    """
    plt.rc('text', usetex=True) # Enable LaTeX formatting for Matplotlib
    # extract plotconfig
    plotconfig = self._plotconfig

    if not self.curves:
        print("No curves to plot.")
        return

    fig, ax = plt.subplots(figsize=(8, 6))

    for data in self.curves.values():
        if data["discrete"]:
            # Discrete data plotting
            ax.scatter(data["times"], data["values"], label=data["label"],
                       color=data["color"], marker=data["marker"],
                       facecolor=data["markerfacecolor"], edgecolor=data["markeredgecolor"],
                       s=data["markersize"]**2)
        else:
            # Continuous data plotting
            t_min, t_max = data["tmin"], data["tmax"]
            if t_range:
                t_min, t_max = max(t_min, t_range[0]), min(t_max, t_range[1])

            t_plot = np.linspace(t_min, t_max, 500)
            CF_plot = data["interpolant"](t_plot)
            ax.plot(t_plot, CF_plot, label=data["label"],
                    color=data["color"], linestyle=data["linestyle"], linewidth=data["linewidth"])

    # Configure the plot
    ax.set_xlabel(f'Time [{plotconfig["tunit"]}]' if plotconfig else "Time")
    ax.set_ylabel(f'Concentration in Food [{plotconfig["Cunit"]}]' if plotconfig else "CF")
    title_main = "Concentration in Food vs. Time"
    title_sub = rf"$\bf{{{self.name}}}$" + (f": {self.description}" if self.description else "")
    ax.set_title(title_main)
    ax.text(0.5, 1.05, title_sub, fontsize=8, ha="center", va="bottom", transform=ax.transAxes)
    ax.legend()
    ax.grid(True)
    plt.show()
    # store metadata
    setattr(fig,_fig_metadata_atrr_,f"cmp_pltCF_{self.name}")
    return fig
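
Example: restrict the comparison plot to an early time window; times are expressed in seconds:

    fig = cmp.plotCF(t_range=(0, 10 * 24 * 3600))  # first 10 days only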
def rgb(self)

Displays a categorized color chart with properly aligned headers.

Expand source code
def rgb(self):
    """Displays a categorized color chart with properly aligned headers."""
    plt.rc('text', usetex=False) # Disable LaTeX formatting for Matplotlib
    rgb()
def save_as_csv(self, filename='CF_data.csv', destinationfolder=os.getcwd(), overwrite=False, t_range=None, num_points=200, time_list=None)

Save stored CF data to a CSV file.

Parameters

- filename: str, CSV filename.
- destinationfolder: str, where to save the file.
- overwrite: bool, overwrite existing file.
- t_range: tuple (t_min, t_max), optional. The time range for interpolation (default: min & max of all stored results).
- num_points: int, optional. Number of points in the interpolated time grid (default: 200).
- time_list: list or array, optional. Explicit list of time points for interpolation (overrides t_range & num_points).

Expand source code
def save_as_csv(self, filename="CF_data.csv", destinationfolder=os.getcwd(), overwrite=False,
                t_range=None, num_points=200, time_list=None):
    """
    Save stored CF data to a CSV file.
    Parameters:
    - filename: str, CSV filename.
    - destinationfolder: str, where to save the file.
    - overwrite: bool, overwrite existing file.
    - t_range: tuple (t_min, t_max), optional
        The time range for interpolation (default: min & max of all stored results).
    - num_points: int, optional
        Number of points in the interpolated time grid (default: 200).
    - time_list: list or array, optional
        Explicit list of time points for interpolation (overrides t_range & num_points).
    """
    if not self.curves:
        print("No data to export.")
        return
    df = self.to_dataframe(t_range=t_range, num_points=num_points, time_list=time_list)
    filepath = os.path.join(destinationfolder, filename)
    if not overwrite and os.path.exists(filepath):
        print(f"File {filepath} already exists. Use overwrite=True to replace it.")
        return
    df.to_csv(filepath, index=False)
    print(f"Saved CSV file: {filepath}")
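
Example: export on an explicit hourly time grid instead of the default evenly spaced grid (the destination folder is illustrative):

    import numpy as np
    cmp.save_as_csv("CF_hourly.csv", destinationfolder="/tmp", overwrite=True,
                    time_list=np.arange(0, 10 * 24 * 3600 + 1, 3600))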
def save_as_excel(self, filename='CF_data.xlsx', destinationfolder=os.getcwd(), overwrite=False, t_range=None, num_points=1000, time_list=None)

Save stored CF data to an Excel file.

Parameters

- filename: str, Excel filename.
- destinationfolder: str, where to save the file.
- overwrite: bool, overwrite existing file.
- t_range: tuple (t_min, t_max), optional. The time range for interpolation (default: min & max of all stored results).
- num_points: int, optional. Number of points in the interpolated time grid (default: 1000).
- time_list: list or array, optional. Explicit list of time points for interpolation (overrides t_range & num_points).

Expand source code
def save_as_excel(self, filename="CF_data.xlsx", destinationfolder=os.getcwd(), overwrite=False,
                  t_range=None, num_points=1000, time_list=None):
    """
    Save stored CF data to an Excel file.
    Parameters:
    - filename: str, Excel filename.
    - destinationfolder: str, where to save the file.
    - overwrite: bool, overwrite existing file.
    - t_range: tuple (t_min, t_max), optional
        The time range for interpolation (default: min & max of all stored results).
    - num_points: int, optional
        Number of points in the interpolated time grid (default: 1000).
    - time_list: list or array, optional
        Explicit list of time points for interpolation (overrides t_range & num_points).
    """
    if not self.curves:
        print("No data to export.")
        return
    df = self.to_dataframe(t_range=t_range, num_points=num_points, time_list=time_list)
    filepath = os.path.join(destinationfolder, filename)
    if not overwrite and os.path.exists(filepath):
        print(f"File {filepath} already exists. Use overwrite=True to replace it.")
        return

    df.to_excel(filepath, index=False)
    print(f"Saved Excel file: {filepath}")
def to_dataframe(self, t_range=None, num_points=1000, time_list=None)

Export interpolated CF data as a pandas DataFrame.

Parameters

- t_range: tuple (t_min, t_max), optional. The time range for interpolation (default: min & max of all stored results).
- num_points: int, optional. Number of points in the interpolated time grid (default: 1000).
- time_list: list or array, optional. Explicit list of time points for interpolation (overrides t_range & num_points).

Returns

- pd.DataFrame: A DataFrame with a "Time (s)" column and one CF column per stored simulation.

Expand source code
def to_dataframe(self, t_range=None, num_points=1000, time_list=None):
    """
    Export interpolated CF data as a pandas DataFrame.
    Parameters:
    - t_range: tuple (t_min, t_max), optional
        The time range for interpolation (default: min & max of all stored results).
    - num_points: int, optional
        Number of points in the interpolated time grid (default: 1000).
    - time_list: list or array, optional
        Explicit list of time points for interpolation (overrides t_range & num_points).
    Returns:
    - pd.DataFrame
        A DataFrame with a "Time (s)" column and one CF column per stored simulation.
    """
    if not self.curves:
        print("No data to export.")
        return pd.DataFrame()

    # Determine the time grid
    if time_list is not None:
        t_grid = np.array(time_list)
    else:
        all_t_min = min(data["tmin"] for data in self.curves.values())
        all_t_max = max(data["tmax"] for data in self.curves.values())
        # Default time range
        t_min, t_max = t_range if t_range else (all_t_min, all_t_max)
        # Create evenly spaced time grid
        t_grid = np.linspace(t_min, t_max, num_points)
    # Create DataFrame with time as index
    df = pd.DataFrame({"Time (s)": t_grid})

    # Interpolate each stored CF curve at the common time grid
    for key, data in self.curves.items():
        df[data["label"]] = data["interpolant"](t_grid)
    return df
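
Example: retrieve the interpolated curves as a DataFrame for further processing:

    df = cmp.to_dataframe(num_points=500)                  # common grid over the full range
    df10 = cmp.to_dataframe(t_range=(0, 10 * 24 * 3600))   # or a restricted window
    print(df.head())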
def update(self, index, label=None, linestyle=None, linewidth=None, color=None, marker=None, markersize=None, markerfacecolor=None, markeredgecolor=None)

Update properties of one or multiple curves.

Parameters

index : int or list of int
Index or indices of the curve(s) to update.
label : str, optional
New label for the curve(s).
linestyle : str, optional
New linestyle for the curve(s).
linewidth : float, optional
New linewidth for the curve(s).
color : str or tuple, optional
New color for the curve(s).
marker : str, optional
New marker style for discrete data.
markersize : float, optional
New marker size for discrete data.
markerfacecolor : str or tuple, optional
New marker face color.
markeredgecolor : str or tuple, optional
New marker edge color.
Expand source code
def update(self, index, label=None, linestyle=None, linewidth=None, color=None,
           marker=None, markersize=None, markerfacecolor=None, markeredgecolor=None):
    """
    Update properties of one or multiple curves.

    Parameters
    ----------
    index : int or list of int
        Index or indices of the curve(s) to update.
    label : str, optional
        New label for the curve(s).
    linestyle : str, optional
        New linestyle for the curve(s).
    linewidth : float, optional
        New linewidth for the curve(s).
    color : str or tuple, optional
        New color for the curve(s).
    marker : str, optional
        New marker style for discrete data.
    markersize : float, optional
        New marker size for discrete data.
    markerfacecolor : str or tuple, optional
        New marker face color.
    markeredgecolor : str or tuple, optional
        New marker edge color.
    """
    keys = self._get_keys_by_indices(index)

    for key in keys:
        if label is not None:
            self.curves[key]["label"] = label
        if linestyle is not None:
            self.curves[key]["linestyle"] = linestyle
        if linewidth is not None:
            self.curves[key]["linewidth"] = linewidth
        if color is not None:
            self.curves[key]["color"] = color
        if marker is not None:
            self.curves[key]["marker"] = marker
        if markersize is not None:
            self.curves[key]["markersize"] = markersize
        if markerfacecolor is not None:
            self.curves[key]["markerfacecolor"] = markerfacecolor
        if markeredgecolor is not None:
            self.curves[key]["markeredgecolor"] = markeredgecolor
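
Example: restyle several curves in one call, then use the single-property shortcuts (assuming at least two stored curves):

    cmp.update([0, 1], linestyle="--", linewidth=1.5, color="Gray")
    cmp.label(0, "reference")
    cmp.linewidth(0, 3)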
def viridis(self, ncolors=16, tooclear=True, reverse=False)

Generates colors from the Viridis colormap.

Expand source code
def viridis(self, ncolors=16, tooclear=True, reverse=False):
    """Generates colors from the Viridis colormap."""
    return colormap("viridis", ncolors, tooclear, reverse)
class Cprofile (x=None, Cx=None)

A class representing a concentration profile C(x) for migration simulations.

This class allows storing, interpolating, and analyzing the concentration of a migrating substance across a spatial domain.

Attributes:

x : np.ndarray
1D array of spatial positions.
Cx : np.ndarray
1D array of corresponding concentration values at x.

Methods:

interp(x_new)
Interpolates the concentration at new spatial positions.
integrate()
Computes the integral of the concentration profile.
mean_concentration()
Computes the mean concentration over the spatial domain.
find_indices_xrange(x_range)
Returns indices where x falls within a specified range.
find_indices_Cxrange(Cx_range)
Returns indices where Cx falls within a specified concentration range.
assign_values(indices, values)
Assigns new concentration values at specified indices.

Example:

x = np.linspace(0, 1, 10)
Cx = np.exp(-x)
profile = Cprofile(x, Cx)

# Interpolating at new points
new_x = np.linspace(0, 1, 50)
interpolated_Cx = profile.interp(new_x)

Initialize the concentration profile Cx(x).

Expand source code
class Cprofile:
    """
    A class representing a concentration profile C(x) for migration simulations.

    This class allows storing, interpolating, and analyzing the concentration of a
    migrating substance across a spatial domain.

    Attributes:
    -----------
    x : np.ndarray
        1D array of spatial positions.
    Cx : np.ndarray
        1D array of corresponding concentration values at `x`.

    Methods:
    --------
    interp(x_new)
        Interpolates the concentration at new spatial positions.
    integrate()
        Computes the integral of the concentration profile.
    mean_concentration()
        Computes the mean concentration over the spatial domain.
    find_indices_xrange(x_range)
        Returns indices where `x` falls within a specified range.
    find_indices_Cxrange(Cx_range)
        Returns indices where `Cx` falls within a specified concentration range.
    assign_values(indices, values)
        Assigns new concentration values at specified indices.

    Example:
    --------
    ```python
    x = np.linspace(0, 1, 10)
    Cx = np.exp(-x)
    profile = Cprofile(x, Cx)

    # Interpolating at new points
    new_x = np.linspace(0, 1, 50)
    interpolated_Cx = profile.interp(new_x)
    ```
    """

    def __init__(self, x=None, Cx=None):
        """Initialize the concentration profile Cx(x)."""
        if x is None or Cx is None:
            raise ValueError("Syntax: myprofile = Cprofile(x, Cx). Both x and Cx are mandatory.")
        self.x = np.array(x, dtype=float).reshape(-1)  # Ensure 1D NumPy array
        self.Cx = np.array(Cx, dtype=float).reshape(-1)  # Ensure 1D NumPy array
        # Check if x is strictly increasing
        if np.any(np.diff(self.x) <= 0):
            raise ValueError("x values must be strictly increasing.")
        # Create the interpolation function
        self._interp_func = interp1d(
            self.x, self.Cx, kind="linear", fill_value=0, bounds_error=False
        )

    def interp(self, x_new):
        """
        Interpolate concentration values at new x positions.

        Parameters:
            x_new (array-like): New positions where concentrations are needed.

        Returns:
            np.ndarray: Interpolated concentration values.
        """
        x_new = np.array(x_new, dtype=float)  # Ensure NumPy array
        return self._interp_func(x_new)

    def integrate(self):
        """
        Compute the integral of Cx over x using Simpson's rule.

        Returns:
            float: The integral ∫ Cx dx.
        """
        return simpson(self.Cx, self.x)

    def mean_concentration(self):
        """
        Compute the mean concentration using the integral.

        Returns:
            float: The mean value of Cx.
        """
        return self.integrate() / (self.x[-1] - self.x[0])

    def find_indices_xrange(self, x_range):
        """
        Find indices where x is within a specified range.

        Parameters:
            x_range (tuple): The (min, max) range of x.

        Returns:
            np.ndarray: Indices where x falls within the range.
        """
        xmin, xmax = x_range
        return np.where((self.x >= xmin) & (self.x <= xmax))[0]

    def find_indices_Cxrange(self, Cx_range=(0, np.inf)):
        """
        Find indices where Cx is within a specified range.

        Parameters:
            Cx_range (tuple): The (min, max) range of Cx.

        Returns:
            np.ndarray: Indices where Cx falls within the range.
        """
        Cmin, Cmax = Cx_range
        return np.where((self.Cx >= Cmin) & (self.Cx <= Cmax))[0]

    def assign_values(self, indices, values):
        """
        Assign new values to Cx at specified indices.

        Parameters:
            indices (array-like): Indices where values should be assigned.
            values (float or array-like): New values to assign.

        Raises:
            ValueError: If the number of values does not match the number of indices.
        """
        indices = np.array(indices, dtype=int)
        if np.isscalar(values):
            self.Cx[indices] = values  # Assign single value to all indices
        else:
            values = np.array(values, dtype=float)
            if values.shape[0] != indices.shape[0]:
                raise ValueError("Number of values must match the number of indices.")
            self.Cx[indices] = values

    def __repr__(self):
        """Representation of the profile."""
        stats_x = {
            "min": np.min(self.x),
            "max": np.max(self.x),
            "mean": np.mean(self.x),
            "median": np.median(self.x),
            "std": np.std(self.x),
        }
        stats_Cx = {
            "min": np.min(self.Cx),
            "max": np.max(self.Cx),
            "mean": np.mean(self.Cx),
            "median": np.median(self.Cx),
            "std": np.std(self.Cx),
        }

        print(
            f"Cprofile: {len(self.x)} points\n",
            f"x range: [{stats_x['min']:.4g}, {stats_x['max']:.4g}]\n",
            f"Cx range: [{stats_Cx['min']:.4g}, {stats_Cx['max']:.4g}]\n",
            f"x stats: mean={stats_x['mean']:.4g}, median={stats_x['median']:.4g}, std={stats_x['std']:.4g}\n",
            f"Cx stats: mean={stats_Cx['mean']:.4g}, median={stats_Cx['median']:.4g}, std={stats_Cx['std']:.4g}"
        )
        return str(self)

    def __str__(self):
        """Returns a formatted string representation of the profile."""
        return f"<{self.__class__.__name__}: including {len(self.x)} points>"
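
Example: a short sketch complementing the class docstring, using the range-selection helpers to deplete part of a profile (the 200 µm thickness is illustrative):

    import numpy as np
    from patankar.migration import Cprofile

    x = np.linspace(0, 200e-6, 50)             # 200 µm thick wall
    C0 = Cprofile(x, np.full_like(x, 1.0))     # uniform initial profile
    idx = C0.find_indices_xrange((0, 50e-6))   # nodes within the first 50 µm
    C0.assign_values(idx, 0.0)                 # deplete that region
    print(C0.mean_concentration())             # ≈ 0.75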

Methods

def assign_values(self, indices, values)

Assign new values to Cx at specified indices.

Parameters

indices (array-like): Indices where values should be assigned.
values (float or array-like): New values to assign.

Raises

ValueError
If the number of values does not match the number of indices.
Expand source code
def assign_values(self, indices, values):
    """
    Assign new values to Cx at specified indices.

    Parameters:
        indices (array-like): Indices where values should be assigned.
        values (float or array-like): New values to assign.

    Raises:
        ValueError: If the number of values does not match the number of indices.
    """
    indices = np.array(indices, dtype=int)
    if np.isscalar(values):
        self.Cx[indices] = values  # Assign single value to all indices
    else:
        values = np.array(values, dtype=float)
        if values.shape[0] != indices.shape[0]:
            raise ValueError("Number of values must match the number of indices.")
        self.Cx[indices] = values
def find_indices_Cxrange(self, Cx_range=(0, inf))

Find indices where Cx is within a specified range.

Parameters

Cx_range (tuple): The (min, max) range of Cx.

Returns

np.ndarray
Indices where Cx falls within the range.
Expand source code
def find_indices_Cxrange(self, Cx_range=(0, np.inf)):
    """
    Find indices where Cx is within a specified range.

    Parameters:
        Cx_range (tuple): The (min, max) range of Cx.

    Returns:
        np.ndarray: Indices where Cx falls within the range.
    """
    Cmin, Cmax = Cx_range
    return np.where((self.Cx >= Cmin) & (self.Cx <= Cmax))[0]
def find_indices_xrange(self, x_range)

Find indices where x is within a specified range.

Parameters

x_range (tuple): The (min, max) range of x.

Returns

np.ndarray
Indices where x falls within the range.
Expand source code
def find_indices_xrange(self, x_range):
    """
    Find indices where x is within a specified range.

    Parameters:
        x_range (tuple): The (min, max) range of x.

    Returns:
        np.ndarray: Indices where x falls within the range.
    """
    xmin, xmax = x_range
    return np.where((self.x >= xmin) & (self.x <= xmax))[0]
def integrate(self)

Compute the integral of Cx over x using Simpson's rule.

Returns

float
The integral ∫ Cx dx.
Expand source code
def integrate(self):
    """
    Compute the integral of Cx over x using Simpson's rule.

    Returns:
        float: The integral ∫ Cx dx.
    """
    return simpson(self.Cx, self.x)
def interp(self, x_new)

Interpolate concentration values at new x positions.

Parameters

x_new (array-like): New positions where concentrations are needed.

Returns

np.ndarray
Interpolated concentration values.
Expand source code
def interp(self, x_new):
    """
    Interpolate concentration values at new x positions.

    Parameters:
        x_new (array-like): New positions where concentrations are needed.

    Returns:
        np.ndarray: Interpolated concentration values.
    """
    x_new = np.array(x_new, dtype=float)  # Ensure NumPy array
    return self._interp_func(x_new)
def mean_concentration(self)

Compute the mean concentration using the integral.

Returns

float
The mean value of Cx.
Expand source code
def mean_concentration(self):
    """
    Compute the mean concentration using the integral.

    Returns:
        float: The mean value of Cx.
    """
    return self.integrate() / (self.x[-1] - self.x[0])
class PrintableFigure (figsize=None, dpi=None, *, facecolor=None, edgecolor=None, linewidth=0.0, frameon=None, subplotpars=None, tight_layout=None, constrained_layout=None, layout=None, **kwargs)

Custom Figure class with print methods.

Parameters

figsize : 2-tuple of floats, default: rcParams["figure.figsize"]
Figure dimension (width, height) in inches.
dpi : float, default: rcParams["figure.dpi"]
Dots per inch.
facecolor : default: rcParams["figure.facecolor"]
The figure patch facecolor.
edgecolor : default: rcParams["figure.edgecolor"]
The figure patch edge color.
linewidth : float
The linewidth of the frame (i.e. the edge linewidth of the figure patch).
frameon : bool, default: rcParams["figure.frameon"]
If False, suppress drawing the figure background patch.
subplotpars : matplotlib.gridspec.SubplotParams
Subplot parameters. If not given, the default subplot parameters rcParams["figure.subplot.*"] are used.
tight_layout : bool or dict, default: rcParams["figure.autolayout"]
Whether to use the tight layout mechanism. Discouraged: use layout='tight' instead for the common case of tight_layout=True, and Figure.set_tight_layout otherwise.
constrained_layout : bool, default: rcParams["figure.constrained_layout.use"]
Equivalent to layout='constrained'. Discouraged: use layout='constrained' instead.
layout : {'constrained', 'compressed', 'tight', 'none', LayoutEngine, None}, default: None
The layout mechanism for positioning of plot elements to avoid overlapping Axes decorations (labels, ticks, etc.). Note that layout managers can have significant performance penalties.

  • 'constrained': The constrained layout solver adjusts Axes sizes to avoid overlapping Axes decorations. Can handle complex plot layouts and colorbars, and is thus recommended. See the Matplotlib constrained layout guide for examples.

  • 'compressed': uses the same algorithm as 'constrained', but removes extra space between fixed-aspect-ratio Axes. Best for simple grids of Axes.

  • 'tight': Use the tight layout mechanism. This is a relatively simple algorithm that adjusts the subplot parameters so that decorations do not overlap. See the Matplotlib tight layout guide for examples.

  • 'none': Do not use a layout engine.

  • A LayoutEngine instance. Built-in layout classes are ConstrainedLayoutEngine and TightLayoutEngine, more easily accessible as 'constrained' and 'tight'. Passing an instance allows third parties to provide their own layout engine.

If not given, falls back to using the parameters tight_layout and constrained_layout, including their config defaults rcParams["figure.autolayout"] and rcParams["figure.constrained_layout.use"].

Other Parameters

**kwargs : Figure properties, optional
Properties: agg_filter: a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image alpha: scalar or None animated: bool canvas: FigureCanvas clip_box: ~matplotlib.transforms.BboxBase or None clip_on: bool clip_path: Patch or (Path, Transform) or None constrained_layout: unknown constrained_layout_pads: unknown dpi: float edgecolor: :mpltype:color facecolor: :mpltype:color figheight: float figure: unknown figwidth: float frameon: bool gid: str in_layout: bool label: object layout_engine: {'constrained', 'compressed', 'tight', 'none', .LayoutEngine, None} linewidth: number mouseover: bool path_effects: list of .AbstractPathEffect picker: None or bool or float or callable rasterized: bool size_inches: (float, float) or float sketch_params: (scale: float, length: float, randomness: float) snap: bool or None tight_layout: unknown transform: ~matplotlib.transforms.Transform url: str visible: bool zorder: float
Expand source code
class PrintableFigure(Figure):
    """Custom Figure class with print methods."""

    def print(self, filename="", destinationfolder=os.getcwd(), overwrite=False, dpi=300):
        print_figure(self, filename, destinationfolder, overwrite, dpi)

    def print_png(self, filename="", destinationfolder=os.getcwd(), overwrite=False, dpi=300):
        print_png(self, filename, destinationfolder, overwrite, dpi)

    def print_pdf(self, filename="", destinationfolder=os.getcwd(), overwrite=False, dpi=300):
        print_pdf(self, filename, destinationfolder, overwrite, dpi)

Ancestors

  • matplotlib.figure.Figure
  • matplotlib.figure.FigureBase
  • matplotlib.artist.Artist

Methods

def print(self, filename='', destinationfolder=os.getcwd(), overwrite=False, dpi=300)
Expand source code
def print(self, filename="", destinationfolder=os.getcwd(), overwrite=False, dpi=300):
    print_figure(self, filename, destinationfolder, overwrite, dpi)
def print_pdf(self, filename='', destinationfolder=os.getcwd(), overwrite=False, dpi=300)
Expand source code
def print_pdf(self, filename="", destinationfolder=os.getcwd(), overwrite=False, dpi=300):
    print_pdf(self, filename, destinationfolder, overwrite, dpi)
def print_png(self, filename='', destinationfolder=os.getcwd(), overwrite=False, dpi=300)
Expand source code
def print_png(self, filename="", destinationfolder=os.getcwd(), overwrite=False, dpi=300):
    print_png(self, filename, destinationfolder, overwrite, dpi)
def set(self, *, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, canvas=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, constrained_layout=<UNSET>, constrained_layout_pads=<UNSET>, dpi=<UNSET>, edgecolor=<UNSET>, facecolor=<UNSET>, figheight=<UNSET>, figwidth=<UNSET>, frameon=<UNSET>, gid=<UNSET>, in_layout=<UNSET>, label=<UNSET>, layout_engine=<UNSET>, linewidth=<UNSET>, mouseover=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, rasterized=<UNSET>, size_inches=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, tight_layout=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, zorder=<UNSET>)

Set multiple properties at once.

Supported properties are

Properties

agg_filter: a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array and two offsets from the bottom left corner of the image alpha: scalar or None animated: bool canvas: FigureCanvas clip_box: ~matplotlib.transforms.BboxBase or None clip_on: bool clip_path: Patch or (Path, Transform) or None constrained_layout: unknown constrained_layout_pads: unknown dpi: float edgecolor: :mpltype:color facecolor: :mpltype:color figheight: float figure: unknown figwidth: float frameon: bool gid: str in_layout: bool label: object layout_engine: {'constrained', 'compressed', 'tight', 'none', .LayoutEngine, None} linewidth: number mouseover: bool path_effects: list of .AbstractPathEffect picker: None or bool or float or callable rasterized: bool size_inches: (float, float) or float sketch_params: (scale: float, length: float, randomness: float) snap: bool or None tight_layout: unknown transform: ~matplotlib.transforms.Transform url: str visible: bool zorder: float

Expand source code
cls.set = lambda self, **kwargs: Artist.set(self, **kwargs)
class SensPatankarResult (name, description, ttarget, t, C, CF, fc, f, x, Cx, tC, C0eq, timebase, restart, restart_unsecure, xi, Cxi, createcontainer=True, container=None, discrete=False)

Container for the results of the 1D mass transfer simulation performed by senspatankar().

Attributes

ttarget : ndarray with shape (1,)
Target simulation time. It is a duration, not an absolute time.
CFtarget : ndarray with shape (1,)
CF value at ttarget
Cxtarget : ndarray with shape (npoints,)
Cx concentration profile at t=ttarget
t : ndarray with shape (ntimes,)
1D array of time points (in seconds) covering from 0 to 2*ttarget. These are durations, not absolute times.
C : ndarray with shape (ntimes,)
1D array of mean concentration in the packaging (averaged over all packaging nodes) at each time step. Shape: (ntimes,).
CF : ndarray with shape (ntimes,)
1D array of concentration in the food (left boundary) at each time step. Shape: (ntimes,).
fc : ndarray with shape (ntimes,)
1D array of the cumulative flux into the food. Shape: (ntimes,).
f : ndarray with shape (ntimes,)
1D array of the instantaneous flux into the food. Shape: (ntimes,).
x : ndarray with shape (npoints,)
1D array of the position coordinates of all packaging nodes (including sub-nodes). npoints = 3 * number of original FV elements (interfaces e and w are included).
Cx : ndarray with shape (ntimes,npoints)
2D array of the concentration profile across the packaging thickness for each time step. Shape: (ntimes, 3 * number_of_nodes). Each row corresponds to one time step.
tC : ndarray with shape (ntimes,)
1D array of the dimensionless time points
C0eq : ndarray with shape (1,)
Reference (equilibrium) concentration scaling factor.
timebase : float
Characteristic time scale (l_ref^2 / D_ref) used to normalize the solution.
interp_CF : scipy.interpolate._interpolate.interp1d
1D interpolant of CF vs time
interp_Cx : scipy.interpolate._interpolate.interp1d
1D interpolant of Cx vs time
restart : restartfile_senspatankar object
Restart object (see restartfile_senspatankar doc)

Constructor for simulation results.

Expand source code
class SensPatankarResult:
    """
    Container for the results of the 1D mass transfer simulation performed by ``senspatankar``.

    Attributes
    ----------
    ttarget : ndarray with shape (1,)
        target simulation time
        It is a duration, not an absolute time.
    CFtarget : ndarray with shape (1,)
        CF value at ttarget
    Cxtarget : ndarray with shape (npoints,)
         Cx concentration profile at t=ttarget
    t : ndarray with shape (ntimes,)
        1D array of time points (in seconds) covering from 0 to 2*ttarget
        It is a duration, not an absolute time.
    C : ndarray with shape (ntimes,)
        1D array of mean concentration in the packaging (averaged over all packaging nodes)
        at each time step. Shape: (ntimes,).
    CF : ndarray with shape (ntimes,)
        1D array of concentration in the food (left boundary) at each time step. Shape: (ntimes,).
    fc : ndarray with shape (ntimes,)
        1D array of the cumulative flux into the food. Shape: (ntimes,).
    f : ndarray with shape (ntimes,)
        1D array of the instantaneous flux into the food. Shape: (ntimes,).
    x : ndarray with shape (npoints,)
        1D array of the position coordinates of all packaging nodes (including sub-nodes).
        npoints = 3 * number of original FV elements (interfaces e and w are included).
    Cx : ndarray with shape (ntimes,npoints)
        2D array of the concentration profile across the packaging thickness for each time step.
        Shape: (ntimes, 3 * number_of_nodes). Each row corresponds to one time step.
    tC : ndarray with shape (ntimes,)
        1D array of the dimensionless time points
    C0eq : ndarray with shape (1,)
        Reference (equilibrium) concentration scaling factor.
    timebase : float
        Characteristic time scale (l_ref^2 / D_ref) used to normalize the solution.
    interp_CF : scipy.interpolate._interpolate.interp1d
        1D interpolant of CF vs time
    interp_Cx : scipy.interpolate._interpolate.interp1d
        1D interpolant of Cx vs time
    restart : restartfile_senspatankar object
        Restart object (see restartfile_senspatankar doc)

    """

    def __init__(self, name, description, ttarget, t, C, CF, fc, f, x, Cx, tC, C0eq, timebase,
                 restart,restart_unsecure,xi,Cxi,
                 _plotconfig=None, createcontainer=True, container=None, discrete=False):
        """Constructor for simulation results."""
        self.name = name
        self.description = description
        self.ttarget = ttarget
        self.t = t
        self.C = C
        self.CF = CF
        self.fc = fc
        self.f = f
        self.x = x
        self.Cx = Cx
        self.tC = tC
        self.C0eq = C0eq
        self.timebase = timebase
        self.discrete = discrete  # New flag for discrete data

        # Interpolation for CF and Cx
        self.interp_CF = interp1d(t, CF, kind="linear", fill_value="extrapolate")
        self.CFtarget = self.interp_CF(ttarget)
        self.interp_Cx = interp1d(t, Cx.T, kind="linear", axis=1, fill_value="extrapolate")
        self.Cxtarget = self.interp_Cx(ttarget)

        # Restart handling
        if xi is not None and Cxi is not None:
            Cxi_interp = interp1d(t, Cxi.T, kind="linear", axis=1, fill_value="extrapolate")
            Cxi_at_t = Cxi_interp(ttarget)
            restart.freezeCF(ttarget, self.CFtarget)
            restart.freezeCx(xi, Cxi_at_t)
        self.restart = restart # secure restart file (cannot be modified from outside)
        self.restart_unsecure = restart_unsecure # unsecure one (can be modified from outside)

        # Plot configuration
        self._plotconfig = _plotconfig if _plotconfig else plotconfig

        # Store state for simulation chaining
        self.savestate(self.restart.inputs["multilayer"], self.restart.inputs["medium"])

        # Default container for results comparison
        if createcontainer:
            if container is None:
                self.comparison = CFSimulationContainer(name=name)
                currentname = "reference"
            elif isinstance(container, CFSimulationContainer):
                self.comparison = container
                currentname = name
            else:
                raise TypeError(f"container must be a CFSimulationContainer, not {type(container).__name__}")
            self.comparison.add(self, label=currentname, color="Crimson", linestyle="-", linewidth=2)

        # Distance pair
        self._distancepair = None


    def pseudoexperiment(self, npoints=25, std_relative=0.05, randomtime=False, autorecord=False, seed=None, t=None, CF=None, scale='linear'):
        """
        Generates discrete pseudo-experimental data from high-resolution simulated results.

        Parameters
        ----------
        npoints : int, optional
            Number of discrete time points to select (default: 25).
        std_relative : float, optional
            Relative standard deviation for added noise (default: 0.05).
        randomtime : bool, optional
            If True, picks random time points; otherwise, uses uniform spacing or a sqrt scale (default: False).
        autorecord : bool, optional
            If True, automatically adds the generated result to the container (default: False).
        seed : int, optional
            Random seed for reproducibility.
        t : list or np.ndarray, optional
            Specific time points to use instead of generated ones. If provided, `CF` must also be supplied.
        CF : list or np.ndarray, optional
            Specific CF values to use at the provided `t` time points. Must have the same length as `t`.
        scale : str, optional
            Determines how time points are distributed when `randomtime=False`:
            - "linear" (default): Uniformly spaced time points.
            - "sqrt": Time points are distributed more densely at the beginning using a square root scale.

        Returns
        -------
        SensPatankarResult
            A new SensPatankarResult object flagged as discrete.

        Raises
        ------
        ValueError
            If `t` and `CF` are provided but have mismatched lengths.
        """

        if seed is not None:
            np.random.seed(seed)

        if t is not None:
            t_discrete = np.array(t, dtype=float)
            if CF is None or len(CF) != len(t_discrete):
                raise ValueError("When providing t, CF values must be provided and have the same length.")
            CF_discrete_noisy = np.array(CF, dtype=float)
        else:
            if randomtime:
                t_discrete = np.sort(np.random.uniform(self.t.min(), self.t.max(), npoints))
            else:
                if scale == 'sqrt':
                    t_discrete = np.linspace(np.sqrt(self.t.min()), np.sqrt(self.t.max()), npoints) ** 2
                else:
                    t_discrete = np.linspace(self.t.min(), self.t.max(), npoints)

            CF_discrete = self.interp_CF(t_discrete)
            noise = np.random.normal(loc=0, scale=std_relative * CF_discrete)
            CF_discrete_noisy = CF_discrete + noise
            CF_discrete_noisy = np.clip(CF_discrete_noisy, a_min=0, a_max=None)

        discrete_result = SensPatankarResult(
            name=f"{self.name}_discrete",
            description=f"Discrete pseudo-experimental data from {self.name}",
            ttarget=self.ttarget,
            t=t_discrete,
            C=np.zeros_like(t_discrete),
            CF=CF_discrete_noisy,
            fc=np.zeros_like(t_discrete),
            f=np.zeros_like(t_discrete),
            x=self.x,
            Cx=np.zeros((len(t_discrete), len(self.x))),
            tC=self.tC,
            C0eq=self.C0eq,
            timebase=self.timebase,
            restart=self.restart,
            restart_unsecure=self.restart_unsecure,
            xi=None,
            Cxi=None,
            _plotconfig=self._plotconfig,
            discrete=True
        )
        if autorecord:
            self.comparison.add(discrete_result, label="pseudo-experiment", color="black", marker='o', discrete=True)
        return discrete_result
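
    # Usage sketch (comment only; `sol` is assumed to be a completed SensPatankarResult):
    #   exp = sol.pseudoexperiment(npoints=30, std_relative=0.08, seed=42, scale="sqrt")
    #   sol.comparison.add(exp, label="noisy data", color="black", discrete=True)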

    @property
    def currrentdistance(self):
        """returns the square distance to the last distance pair"""
        return self.distanceSq(self._distancepair) if self._distancepair is not None else None

    def __sub__(self, other):
        """Overloads the operator - for returning a square distance function"""
        return lambda: self.distanceSq(other)

    def distanceSq(self, other, std_relative=0.05, npoints=100, cum=True):
        """
        Compute the squared distance between two SensPatankarResult instances.

        Parameters
        ----------
        other : SensPatankarResult
            The other instance to compare against.
        std_relative : float, optional
            Relative standard deviation for normalization (default: 0.05).
        npoints : int, optional
            Number of points for interpolation if both are continuous (default: 100).
        cum : bool, optional
            If True, return the cumulative sum; otherwise, return pointwise values.

        Returns
        -------
        float or np.ndarray
            The squared normalized error.

        Raises
        ------
        TypeError
            If `other` is not an instance of SensPatankarResult.
        ValueError
            If the time ranges do not overlap or if discrete instances have different time points.
        """
        if not isinstance(other, SensPatankarResult):
            raise TypeError(f"other must be a SensPatankarResult not a {type(other).__name__}")

        # refresh
        self._distancepair = other # used for distance evaluation as self.currentdistance
        # Find common time range
        tmin, tmax = max(self.t.min(), other.t.min()), min(self.t.max(), other.t.max())
        if tmin >= tmax:
            raise ValueError("No overlapping time range between instances.")
        if not self.discrete and not other.discrete:
            # Case 1: Both are continuous
            t_common = np.linspace(tmin, tmax, npoints)
            CF_self = self.interp_CF(t_common)
            CF_other = other.interp_CF(t_common)
        elif self.discrete and not other.discrete:
            # Case 2: self is discrete, other is continuous
            t_common = self.t
            CF_self = self.CF
            CF_other = other.interp_CF(self.t)
        elif not self.discrete and other.discrete:
            # Case 3: self is continuous, other is discrete
            t_common = other.t
            CF_self = self.interp_CF(other.t)
            CF_other = other.CF
        else:
            # Case 4: Both are discrete
            if not np.array_equal(self.t, other.t):
                raise ValueError("Discrete instances must have the same time points.")
            t_common = self.t
            CF_self = self.CF
            CF_other = other.CF
        # Compute squared normalized error
        m = (CF_self + CF_other) / 2
        m[m == 0] = 1  # Avoid division by zero, results in zero error where both are zero
        e2 = ((CF_self - CF_other) / (m * std_relative)) ** 2
        return np.sum(e2) if cum else e2
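
    # Usage sketch (comment only): squared distance between a continuous result `sol`
    # and a discrete one `exp`:
    #   d2 = sol.distanceSq(exp)             # cumulative squared normalized error
    #   e2 = sol.distanceSq(exp, cum=False)  # pointwise contributions
    #   crit = sol - exp                     # the - operator returns a callable criterion
    #   crit()                               # same value as sol.distanceSq(exp)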

    def fit(self,other,disp=True,std_relative=0.05,maxiter=100,xatol=1e-3,fatol=1e-3):
        """Fits simulation parameters D and k to fit a discrete CF data"""
        if not isinstance(other,SensPatankarResult):
            raise TypeError(f"other must be a SensPatankarResult not a {type(other).__name__}")
        if self.discrete:
            raise ValueError("the current instance contains discrete data, use it as other")
        if not other.discrete:
            raise ValueError("only discrete CF results can be fitted")
        # retrieve current Dlink and klink
        Dlink = self.restart_unsecure.inputs["multilayer"].Dlink
        klink = self.restart_unsecure.inputs["multilayer"].klink
        if Dlink is None and klink is None:
            raise ValueError("provide at least a Dlink or klink object")
        if Dlink is not None and not isinstance(Dlink,layerLink):
            raise TypeError(f"Dlink must be a layerLink not a {type(Dlink).__name__}")
        if klink is not None and not isinstance(klink,layerLink):
            raise TypeError(f"klink must be a layerLink not a {type(klink).__name__}")
        # options for the optimizer
        optimOptions = {"disp": disp, "maxiter": maxiter, "xatol": xatol, "fatol": fatol}
        # params is assembled by concatenating -log(Dlink.values) and log(klink.values)
        params_initial = np.concatenate((-np.log(Dlink.values),np.log(klink.values)))
        maskD = np.concatenate((np.ones(Dlink.nzlength, dtype=bool), np.zeros(klink.nzlength, dtype=bool)))
        maskk = np.concatenate((np.zeros(Dlink.nzlength, dtype=bool), np.ones(klink.nzlength, dtype=bool)))
        # distance criterion
        d2 = lambda: self.distanceSq(other, std_relative=std_relative) # distance criterion (d2 = self - other uses the default std_relative)
        def objective(params):
            """objective function, all parameters are passed via layerLink"""
            logD = params[maskD]
            logk = params[maskk]
            Dlink.values = np.exp(-logD)
            klink.values = np.exp(logk)
            self.rerun(name="optimizer",color="OrangeRed",linewidth=4)
            return d2()
        def callback(params):
            """Called at each iteration to display current values."""
            Dtmp, ktmp = np.exp(-params[maskD]), np.exp(params[maskk])
            print("Fitting Iteration:\n",f"D={Dtmp} [m²/s]\n",f"k={ktmp} [a.u.]\n")
        # do the optimization
        result = minimize(objective,
                          params_initial,
                          method='Nelder-Mead',
                          callback=callback,
                          options=optimOptions)
        # extract the solution, be sure it is updated
        Dlink.values, klink.values = np.exp(-result.x[maskD]), np.exp(result.x[maskk])
        return result
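
    # Usage sketch (comment only): fit D and k against discrete data `exp`, assuming the
    # multilayer was built with layerLink objects (Dlink/klink) so that fit() can update them:
    #   res = sol.fit(exp, maxiter=50, xatol=1e-3, fatol=1e-3)
    #   # on return, the fitted values are written back to Dlink.values and klink.values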


    def savestate(self,multilayer,medium):
        """Saves senspatankar inputs for simulation chaining"""
        self._lastmedium = medium
        self._lastmultilayer = multilayer
        self._isstatesaved = True

    def update(self, **kwargs):
        """
        Update modifiable parameters of the SensPatankarResult object.
        Parameters:
            - name (str): New name for the object.
            - description (str): New description.
            - tscale (float or tuple): Time scale (can be tuple like (1, "day")).
            - tunit (str): Time unit.
            - lscale (float or tuple): Length scale (can be tuple like (1e-6, "µm")).
            - lunit (str): Length unit.
            - Cscale (float or tuple): Concentration scale (can be tuple like (1, "a.u.")).
            - Cunit (str): Concentration unit.
        """
        def checkunits(value):
            """Helper function to handle unit conversion for scale/unit tuples."""
            if isinstance(value, tuple) and len(value) == 2:
                scale, unit = check_units(value)
                scale, unit = np.array(scale, dtype=float), str(unit)  # Ensure correct types
                return scale.item(), unit  # Convert numpy array to float
            elif isinstance(value, (int, float, np.ndarray)):
                value = np.array(value, dtype=float)  # Ensure float
                return value.item(), None  # Return as float with no unit change
            else:
                raise ValueError(f"Invalid value for scale/unit: {value}")

        # Update `name` and `description` if provided
        if "name" in kwargs:
            self.name = str(kwargs["name"])
        if "description" in kwargs:
            self.description = str(kwargs["description"])
        # Update `_plotconfig` parameters
        for key in ["tscale", "tunit", "lscale", "lunit", "Cscale", "Cunit"]:
            if key in kwargs:
                value = kwargs[key]

                if key in ["tscale", "lscale", "Cscale"]:
                    value, unit = checkunits(value)  # Process unit conversion
                    self._plotconfig[key] = value
                    if unit is not None:
                        self._plotconfig[key.replace("scale", "unit")] = unit  # Ensure unit consistency
                else:
                    self._plotconfig[key] = str(value)  # Convert unit strings directly
        return self  # Return self for method chaining if needed


    def rerun(self,name=None,color=None,linestyle=None,linewidth=None, container=None, **kwargs):
        """
        Rerun the simulation (while keeping everything unchanged)
            This function is intended to be used with layerLinks for updating the parameters internally.
            R.rerun() stores the updated simulation results in R
            Rupdate = R.rerun() returns a copy of R while updating R

        note: use R.resume() to resume/continue a simulation; R.rerun() is intended for sensitivity analysis and fitting.
        """
        F = self._lastmedium
        P = self._lastmultilayer
        if not isinstance(F, foodphysics):
            raise TypeError(f"the current object is corrupted, _lastmedium is {type(self._lastmedium).__name__}")
        if not isinstance(P, layer):
            raise TypeError(f"the current object is corrupted, _lastmultilayer is {type(self._lastmultilayer).__name__}")
        container = self.comparison if container is None else container
        if not isinstance(container,CFSimulationContainer):
            raise TypeError(f"the container should be a CFSimulationContainer not a {type(container).__name__}")
        # rerun the simulation using unsecure restart data
        inputs = self.restart_unsecure.inputs # all previous inputs
        R = senspatankar(multilayer=inputs["multilayer"],
                              medium=inputs["medium"],
                              name=name if name is not None else inputs["name"],
                              description=kwargs.get("description",inputs["description"]),
                              t=kwargs.get("t",inputs["t"]),
                              autotime=kwargs.get("autotime",inputs["autotime"]),
                              timescale=kwargs.get("timescale",inputs["timescale"]),
                              Cxprevious=inputs["Cxprevious"],
                              ntimes=kwargs.get("ntimes",inputs["ntimes"]),
                              RelTol=kwargs.get("RelTol",inputs["RelTol"]),
                              AbsTol=kwargs.get("AbsTol",inputs["AbsTol"]),
                              container=container)
        # Update numeric data in self with those in R
        self.t = R.t
        self.C = R.C
        self.CF = R.CF
        self.fc = R.fc
        self.f = R.f
        self.x = R.x
        self.Cx = R.Cx
        self.tC = R.tC
        self.C0eq = R.C0eq
        self.timebase = R.timebase
        self.discrete = R.discrete
        self.interp_CF = R.interp_CF
        self.CFtarget = R.CFtarget
        self.interp_Cx = R.interp_Cx
        self.Cxtarget = R.Cxtarget
        # Update label, color, linestyle, linewidth for the new curve (-1: last in the container)
        # note if name already exists, the previous content is replaced
        self.comparison.update(-1, label=name, color=color, linestyle=linestyle, linewidth=linewidth)
        return self # for chaining


    def resume(self,t=None,**kwargs):
        """
        Resume the simulation for a new duration (with all parameters unchanged)

        For convenience, user overrides are provided as:
            parameter = value
            with parameter = "name","description"..."RelTol","AbsTol" (see senspatankar)
        Use specifically:
            CF0 to assign a different concentration for the food
            Cx0 (Cprofile object) to assign a different concentration profile (not recommended)
            medium to set a different medium (food) in contact
        """

        # retrieve previous results
        previousCF = self.restart.CF # CF at target
        previousCx = self.restart.Cprofile # corresponding profile
        previousmedium = self.restart.inputs["medium"].copy()
        previousmedium.CF0 = previousCF # we apply the concentration
        # CF override with CF=new value
        isCF0forced = "CF0" in kwargs
        newmedium = kwargs.get("medium",previousmedium)
        if isCF0forced:
            newCF0 = kwargs.get("CF0",previousCF)
            newmedium.CF0 = newCF0
        if t is None:
            ttarget = newmedium.get_param("contacttime",(10,"days"),acceptNone=False)
            t = 2*ttarget
        # Concentration profile override with Cx0=new profile
        newCx0 = kwargs.get("Cx0",previousCx)
        if not isinstance(newCx0,Cprofile):
            raise TypeError(f"Cx0 should be a Cprofile object not a {type(newCx0).__name__}")

        # extend the existing solution
        inputs = self.restart.inputs # all previous inputs
        newsol = senspatankar(multilayer=inputs["multilayer"],
                              medium=newmedium,
                              name=kwargs.get("name",inputs["name"]),
                              description=kwargs.get("description",inputs["description"]),
                              t=t,
                              autotime=kwargs.get("autotime",inputs["autotime"]),
                              timescale=kwargs.get("timescale",inputs["timescale"]),
                              Cxprevious=newCx0,
                              ntimes=kwargs.get("ntimes",inputs["ntimes"]),
                              RelTol=kwargs.get("RelTol",inputs["RelTol"]),
                              AbsTol=kwargs.get("AbsTol",inputs["AbsTol"]))
        return newsol


    def copy(self):
        """
        Creates a deep copy of the current SensPatankarResult instance.

        Returns
        -------
        SensPatankarResult
            A new instance with identical attributes as the original.
        """
        return SensPatankarResult(
            name=self.name,
            description=self.description,
            ttarget=self.ttarget,
            t=self.t.copy(),
            C=self.C.copy(),
            CF=self.CF.copy(),
            fc=self.fc.copy(),
            f=self.f.copy(),
            x=self.x.copy(),
            Cx=self.Cx.copy(),
            tC=self.tC.copy(),
            C0eq=self.C0eq,
            timebase=self.timebase,
            restart=self.restart,
            restart_unsecure=self.restart_unsecure,
            xi=None,
            Cxi=None,
            _plotconfig=self._plotconfig,
            discrete=self.discrete
        )

    def chaining(self,multilayer,medium,**kwargs):
        sim = self.resume(multilayer=multilayer,medium=medium,**kwargs)
        medium.lastsimulation = sim # store the last simulation result in medium
        medium.lastinput = multilayer # store the last input (in medium)
        sim.savestate(multilayer,medium) # store the inputs in sim for chaining
        return sim

    # overloading operation
    def __rshift__(self, medium):
        """Overloads >> to propagate migration to food."""
        if not isinstance(medium,foodphysics):
            raise TypeError(f"medium must be a foodphysics object not a {type(medium).__name__}")
        if not self._isstatesaved:
            raise RuntimeError("The previous inputs were not saved within the instance.")
        # we update the contact temperature (see example3)
        return self.chaining(medium>>self._lastmultilayer,medium,CF0=self.restart.CF)

    def __add__(self, other):
        """Concatenate two solutions"""
        if not isinstance(other, SensPatankarResult):
            raise TypeError("Can only add two SensPatankarResult objects")

        # Ensure compatibility of x-axis
        if not np.isclose(self.x[0], other.x[0]) or not np.isclose(self.x[-1], other.x[-1]):
            raise ValueError("Mismatch in x-axis boundaries between solutions")

        # Interpolate other.Cx onto self.x
        interp_Cx_other = interp1d(other.x, other.Cx.T, kind="linear", fill_value=0, axis=0)
        Cx_other_interp = interp_Cx_other(self.x).T  # Ensuring shape (ntimes, npoints)

        # Restrict times for valid merging
        valid_indices_self = self.t <= self.ttarget
        valid_indices_other = (other.t > 0) #& (other.t <= other.ttarget)
        t_self = self.t[valid_indices_self]
        t_other = other.t[valid_indices_other] + self.ttarget  # Shift time

        # Merge time arrays without duplicates
        t_merged = np.unique(np.concatenate((t_self, t_other)))
        tC_merged = np.unique(np.concatenate((self.tC[valid_indices_self], other.tC[valid_indices_other])))

        # Merge concentration-related attributes
        C_merged = np.concatenate((self.C[valid_indices_self], other.C[valid_indices_other]))
        CF_merged = np.concatenate((self.CF[valid_indices_self], other.CF[valid_indices_other]))
        fc_merged = np.concatenate((self.fc[valid_indices_self], other.fc[valid_indices_other]))
        f_merged = np.concatenate((self.f[valid_indices_self], other.f[valid_indices_other]))

        # Merge concentration profiles
        Cx_merged = np.vstack((self.Cx[valid_indices_self], Cx_other_interp[valid_indices_other]))

        # Merged description
        if self.description and other.description:
            merged_description = f"Merged: {self.description} & {other.description}"
        elif self.description:
            merged_description = self.description
        elif other.description:
            merged_description = other.description
        else:
            merged_description = ""

        # Create new instance with merged data
        merged_result = SensPatankarResult(
            name=f"{self.name} + {other.name}" if self.name!=other.name else self.name,
            description=merged_description,
            ttarget=self.ttarget + other.ttarget,
            t=t_merged,
            C=C_merged,
            CF=CF_merged,
            fc=fc_merged,
            f=f_merged,
            x=self.x,  # Keep self.x as reference
            Cx=Cx_merged,
            tC=tC_merged,
            C0eq=self.C0eq,  # Keep self.C0eq
            timebase=other.timebase,  # Take timebase from other
            restart=other.restart,  # Take restart from other (the last valid one)
            restart_unsecure=other.restart_unsecure,  # Take restart from other (the last valid one)
            xi=None,  # xi and Cxi values are available
            Cxi=None  # only from a fresh simulation
        )

        return merged_result

    def interpolate_CF(self, t, kind="linear", fill_value="extrapolate"):
        """
        Interpolates the concentration in the food (CF) at given time(s).

        Parameters
        ----------
        t : float, list, tuple, or ndarray
            Time(s) at which to interpolate CF values.
            - If a tuple, it should be (value or list, unit) and will be converted to SI.
            - If a scalar or list, it is assumed to be in SI units already.
        kind : str, optional
            Interpolation method. Default is "linear".
            Possible values:
            - "linear": Piecewise linear interpolation (default).
            - "nearest": Nearest-neighbor interpolation.
            - "zero": Zero-order spline interpolation.
            - "slinear", "quadratic", "cubic": Spline interpolations of various orders.
        fill_value : str or float, optional
            Specifies how to handle values outside the given range.
            - "extrapolate" (default): Extrapolates values beyond available data.
            - Any float: Uses a constant value for out-of-bounds interpolation.

        Returns
        -------
        ndarray
            Interpolated CF values at the requested time(s).
        """
        # Convert time input to SI units if provided as a tuple
        if isinstance(t, tuple):
            t, _ = check_units(t)  # Convert to numeric array

        # Ensure t is a NumPy array for vectorized operations
        t = np.atleast_1d(t)

        # Create the interpolant on demand with user-defined settings
        interp_function = interp1d(self.t, self.CF, kind=kind, fill_value=fill_value, bounds_error=False)

        # Return interpolated values
        return interp_function(t)


    def __repr__(self):
        ntimes = len(self.t)
        nx = self.Cx.shape[1] if self.Cx.ndim > 1 else len(self.x)
        tmin, tmax = self.t.min(), self.t.max()
        xmin, xmax = self.x.min(), self.x.max()

        print(f"SensPatankarResult: {self.name}\n"
              f"\t {self.description if self.description != '' else '<no description>'}\n"
              f"\t - with {ntimes} time steps\n",
              f"\t - with {nx} spatial points\n"
              f"\t - Time range: [{tmin:.2e}, {tmax:.2e}] s\n"
              f"\t - Position range: [{xmin:.2e}, {xmax:.2e}] m")

        return str(self)


    def __str__(self):
        return (f'<{self.__class__.__name__}:{self.name}: '
            f'CF({(self.ttarget / plotconfig["tscale"]).item():.4g} [{plotconfig["tunit"]}]) = '
            f'{(self.CFtarget / plotconfig["Cscale"]).item():.4g} [{plotconfig["Cunit"]}]>')



    def plotCF(self, t=None, trange=None):
        """
        Plot the concentration in the food (CF) as a function of time.

        - If `self.discrete` is True, plots discrete points.
        - If `self.discrete` is False, plots a continuous curve.
        - Highlights the target time(s).

        Parameters
        ----------
        t : float, list, or None, optional
            Specific time(s) for which the concentration should be highlighted.
            If None, defaults to `ttarget`.
        trange : None, float, or list [t_min, t_max], optional
            If None, the full profile is shown.
            If a float, it is treated as an upper bound (lower bound assumed 0).
            If a list `[t_min, t_max]`, the profile is limited to that range.
        """
        plt.rc('text', usetex=False) # Disable LaTeX rendering; use Matplotlib's built-in mathtext
        # Extract plot configuration
        plotconfig = self._plotconfig
        # Ensure t is a list (even if a single value is given)
        if t is None:
            t_values = [self.ttarget]
        elif isinstance(t, (int, float)):
            t_values = [t]
        elif isinstance(t, np.ndarray):
            t_values = t.flatten()
        elif isinstance(t, tuple):
            t_values = check_units(t)[0]
        else:
            t_values = np.array(t)  # Convert to array
        # Interpolate CF values at given times
        CF_t_values = self.interp_CF(t_values)
        # Handle trange selection
        if trange is None:
            t_plot = self.t
            CF_plot = self.CF
        else:
            # Convert trange to a valid range
            if isinstance(trange, (int, float)):
                trange = [0, trange]  # Assume lower bound is 0
            elif len(trange) != 2:
                raise ValueError("trange must be None, a single float (upper bound), or a list of two values [t_min, t_max]")
            # Validate range
            t_min, t_max = trange
            if t_min < self.t.min() or t_max > self.t.max():
                print("Warning: trange values are outside the available time range and may cause extrapolation.")
            # Generate time values within range
            mask = (self.t >= t_min) & (self.t <= t_max)
            t_plot = self.t[mask]
            CF_plot = self.CF[mask]
        # Set up colormap for multiple target values
        cmap = plt.get_cmap('viridis', len(t_values))
        norm = mcolors.Normalize(vmin=min(t_values), vmax=max(t_values))
        # Create the figure
        fig, ax = plt.subplots(figsize=(8, 6))
        # Plot behavior depends on whether data is discrete
        if self.discrete:
            ax.scatter(t_plot / plotconfig["tscale"], CF_plot / plotconfig["Cscale"],
                       color='b', label='Concentration in Food (Discrete)', marker='o', alpha=0.7)
        else:
            ax.plot(t_plot / plotconfig["tscale"], CF_plot / plotconfig["Cscale"],
                    label='Concentration in Food', color='b')
        # Highlight each target time
        for i, tC in enumerate(t_values):
            color = tooclear(cmap(norm(tC))) if len(t_values) > 1 else 'r'  # Use colormap only if multiple t values

            # Vertical and horizontal lines
            ax.axvline(tC / plotconfig["tscale"], color=color, linestyle='--', linewidth=1)
            ax.axhline(CF_t_values[i] / plotconfig["Cscale"], color=color, linestyle='--', linewidth=1)
            # Highlight points
            ax.scatter(tC / plotconfig["tscale"], CF_t_values[i] / plotconfig["Cscale"],
                       color=color, edgecolor='black', zorder=3, marker='D')
            # Annotate time
            ax.text(tC / plotconfig["tscale"], min(CF_plot) / plotconfig["Cscale"],
                    f'{(tC / plotconfig["tscale"]).item():.2f} {plotconfig["tunit"]}',
                    verticalalignment='bottom', horizontalalignment='right', rotation=90, fontsize=10, color=color)
            # Annotate concentration
            ax.text(min(t_plot) / plotconfig["tscale"], CF_t_values[i] / plotconfig["Cscale"],
                    f'{(CF_t_values[i] / plotconfig["Cscale"]).item():.2f} {plotconfig["Cunit"]}',
                    verticalalignment='bottom', horizontalalignment='left', fontsize=10, color=color)
        # Labels and title
        ax.set_xlabel(f'Time [{plotconfig["tunit"]}]')
        ax.set_ylabel(f'Concentration in Food [{plotconfig["Cunit"]}]')
        title_main = "Concentration in Food vs. Time"
        if self.discrete:
            title_main += " (Discrete Data)"
        title_sub = rf"$\bf{{{self.name}}}$" + (f": {self.description}" if self.description else "")
        ax.set_title(f"{title_main}\n{title_sub}", fontsize=10)
        #ax.text(0.5, 1.05, title_sub, fontsize=8, ha="center", va="bottom", transform=ax.transAxes)
        ax.legend()
        ax.grid(True)
        plt.show()
        # Store metadata
        setattr(fig, _fig_metadata_atrr_, f"pltCF_{self.name}")
        return fig



    def plotCx(self, t=None, nmax=15):
        """
        Plot the concentration profiles (Cx) in the packaging vs. position (x) for different times,
        using a color gradient similar to Parula, based on time values (not index order).
        Additionally, highlight the concentration profile at `ttarget` with a thick black line.

        Parameters
        ----------
        t : list, array-like, or None, optional
            List of specific times to plot. Only valid values (inside self.t) are used.
            If None, time values are selected using sqrt-spaced distribution.
        nmax : int, optional
            Maximum number of profiles to plot. The default is 15.
        """
        plt.rc('text', usetex=False) # Disable LaTeX rendering; use Matplotlib's built-in mathtext
        # short circuit
        if self.discrete:
            print("discrete SensPatankarResult instance does not contain profile data, nothing to plot.")
            return None
        # extract plotconfig
        plotconfig = self._plotconfig
        # Ensure time values are within the available time range
        if t is None:
            # Default: Select `nmax` time values using sqrt-spacing
            nt = len(self.t)
            if nt <= nmax:
                t_values = self.t
            else:
                sqrt_t = np.sqrt(self.t)
                sqrt_t_values = np.linspace(sqrt_t[0], sqrt_t[-1], nmax)
                t_values = sqrt_t_values**2
        else:
            # Use user-specified time values
            if isinstance(t,tuple):
                t_values = check_units(t)[0]
            else:
                t_values = np.array(t)
            # Keep only valid times inside `self.t`
            t_values = t_values[(t_values >= self.t.min()) & (t_values <= self.t.max())]
            if len(t_values) == 0:
                print("Warning: No valid time values found in the specified range.")
                return
            # If more than `nmax`, keep the first `nmax` values
            t_values = t_values[:nmax]
        # Normalize time for colormap (Ensure at least one valid value)
        norm = mcolors.Normalize(vmin=t_values.min()/plotconfig["tscale"],
                                 vmax=t_values.max()/plotconfig["tscale"]) if len(t_values) > 1 \
            else mcolors.Normalize(vmin=self.t.min()/plotconfig["tscale"],
                                   vmax=self.t.max()/plotconfig["tscale"])
        cmap = plt.get_cmap('viridis', nmax)  # 'viridis' is similar to Parula
        # new figure
        fig, ax = plt.subplots(figsize=(8, 6))  # Explicitly create a figure and axis
        # Plot all valid concentration profiles with time-based colormap
        for tC in t_values:
            C = self.interp_Cx(tC)
            color = tooclear(cmap(norm(tC/plotconfig["tscale"])))  # Get color from colormap
            ax.plot(self.x / plotconfig["lscale"], C / plotconfig["Cscale"],
                    color=color, alpha=0.9, label=f't={tC / plotconfig["tscale"]:.3g} {plotconfig["tunit"]}')
        # Highlight concentration profile at `ttarget`
        ax.plot(self.x / plotconfig["lscale"], self.Cxtarget / plotconfig["Cscale"], 'k-', linewidth=3,
                label=f't={self.ttarget[0] / plotconfig["tscale"]:.2g} {plotconfig["tunit"]} (target)')
        # Create ScalarMappable and add colorbar
        sm = cm.ScalarMappable(cmap=cmap, norm=norm)
        sm.set_array([])  # Needed for colorbar
        cbar = fig.colorbar(sm, ax=ax)  # Explicitly associate colorbar with axis
        cbar.set_label(f'Time [{plotconfig["tunit"]}]')
        ax.set_xlabel(f'Position [{plotconfig["lunit"]}]')
        ax.set_ylabel(f'Concentration in Packaging [{plotconfig["Cunit"]}]')
        title_main = "Concentration Profiles in Packaging vs. Position"
        title_sub = rf"$\bf{{{self.name}}}$" + (f": {self.description}" if self.description else "")
        ax.set_title(f"{title_main}\n{title_sub}", fontsize=10)
        #ax.text(0.5, 1.05, title_sub, fontsize=8, ha="center", va="bottom", transform=ax.transAxes)
        ax.grid(True)
        ax.legend()
        plt.show()
        # store metadata
        setattr(fig,_fig_metadata_atrr_,f"pltCx_{self.name}")
        return fig

Instance variables

var currrentdistance

returns the squared distance to the last result compared via distanceSq()

Expand source code
@property
def currrentdistance(self):
    """returns the square distance to the last distance pair"""
    return self.distanceSq(self._distancepair) if self._distancepair is not None else None

Methods

def chaining(self, multilayer, medium, **kwargs)
Expand source code
def chaining(self,multilayer,medium,**kwargs):
    sim = self.resume(multilayer=multilayer,medium=medium,**kwargs)
    medium.lastsimulation = sim # store the last simulation result in medium
    medium.lastinput = multilayer # store the last input (in medium)
    sim.savestate(multilayer,medium) # store the inputs in sim for chaining
    return sim
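
In practice, `chaining` is rarely called directly: it is triggered by the `>>` operator, which propagates the final state of a finished simulation to a new contacting medium. A minimal sketch follows; the names `sol` and `multilayer` are assumed to come from an earlier run whose inputs were saved with `savestate()`.

```python
# Sketch only: `sol` is a SensPatankarResult whose inputs were saved with
# sol.savestate(multilayer, medium) during the first contact step.
from patankar.food import ethanol

medium2 = ethanol()      # second contacting medium
sol2 = sol >> medium2    # __rshift__ calls sol.chaining(medium2 >> multilayer, medium2, CF0=...)
sol2.plotCF()            # concentration in the second medium vs. time
```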
def copy(self)

Creates a deep copy of the current SensPatankarResult instance.

Returns

SensPatankarResult
A new instance with identical attributes as the original.
Expand source code
def copy(self):
    """
    Creates a deep copy of the current SensPatankarResult instance.

    Returns
    -------
    SensPatankarResult
        A new instance with identical attributes as the original.
    """
    return SensPatankarResult(
        name=self.name,
        description=self.description,
        ttarget=self.ttarget,
        t=self.t.copy(),
        C=self.C.copy(),
        CF=self.CF.copy(),
        fc=self.fc.copy(),
        f=self.f.copy(),
        x=self.x.copy(),
        Cx=self.Cx.copy(),
        tC=self.tC.copy(),
        C0eq=self.C0eq,
        timebase=self.timebase,
        restart=self.restart,
        restart_unsecure=self.restart_unsecure,
        xi=None,
        Cxi=None,
        _plotconfig=self._plotconfig,
        discrete=self.discrete
    )
def distanceSq(self, other, std_relative=0.05, npoints=100, cum=True)

Compute the squared distance between two SensPatankarResult instances.

Parameters

other : SensPatankarResult
The other instance to compare against.
std_relative : float, optional
Relative standard deviation for normalization (default: 0.05).
npoints : int, optional
Number of points for interpolation if both are continuous (default: 100).
cum : bool, optional
If True, return the cumulative sum; otherwise, return pointwise values.

Returns

float or np.ndarray
The squared normalized error.

Raises

TypeError
If other is not an instance of SensPatankarResult.
ValueError
If the time ranges do not overlap or if discrete instances have different time points.
Expand source code
def distanceSq(self, other, std_relative=0.05, npoints=100, cum=True):
    """
    Compute the squared distance between two SensPatankarResult instances.

    Parameters
    ----------
    other : SensPatankarResult
        The other instance to compare against.
    std_relative : float, optional
        Relative standard deviation for normalization (default: 0.05).
    npoints : int, optional
        Number of points for interpolation if both are continuous (default: 100).
    cum : bool, optional
        If True, return the cumulative sum; otherwise, return pointwise values.

    Returns
    -------
    float or np.ndarray
        The squared normalized error.

    Raises
    ------
    TypeError
        If `other` is not an instance of SensPatankarResult.
    ValueError
        If the time ranges do not overlap or if discrete instances have different time points.
    """
    if not isinstance(other, SensPatankarResult):
        raise TypeError(f"other must be a SensPatankarResult not a {type(other).__name__}")

    # refresh
    self._distancepair = other # used for distance evaluation as self.currentdistance
    # Find common time range
    tmin, tmax = max(self.t.min(), other.t.min()), min(self.t.max(), other.t.max())
    if tmin >= tmax:
        raise ValueError("No overlapping time range between instances.")
    if not self.discrete and not other.discrete:
        # Case 1: Both are continuous
        t_common = np.linspace(tmin, tmax, npoints)
        CF_self = self.interp_CF(t_common)
        CF_other = other.interp_CF(t_common)
    elif self.discrete and not other.discrete:
        # Case 2: self is discrete, other is continuous
        t_common = self.t
        CF_self = self.CF
        CF_other = other.interp_CF(self.t)
    elif not self.discrete and other.discrete:
        # Case 3: self is continuous, other is discrete
        t_common = other.t
        CF_self = self.interp_CF(other.t)
        CF_other = other.CF
    else:
        # Case 4: Both are discrete
        if not np.array_equal(self.t, other.t):
            raise ValueError("Discrete instances must have the same time points.")
        t_common = self.t
        CF_self = self.CF
        CF_other = other.CF
    # Compute squared normalized error
    m = (CF_self + CF_other) / 2
    m[m == 0] = 1  # Avoid division by zero, results in zero error where both are zero
    e2 = ((CF_self - CF_other) / (m * std_relative)) ** 2
    return np.sum(e2) if cum else e2
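
A short, hedged sketch of how `distanceSq` scores a continuous result against discrete data; the medium/layer setup follows the module example, and `pseudoexperiment()` is used here only to fabricate the discrete reference.

```python
from patankar.food import ethanol
from patankar.layer import layer
from patankar.migration import senspatankar

medium = ethanol()
multilayer = layer(layername="layer A") + layer(layername="layer B")
sol = senspatankar(multilayer, medium)            # continuous solution
expt = sol.pseudoexperiment(npoints=20, seed=0)   # discrete pseudo-data

err_total = sol.distanceSq(expt)                  # cumulative squared normalized error (scalar)
err_points = sol.distanceSq(expt, cum=False)      # pointwise contributions (array)
```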
def fit(self, other, disp=True, std_relative=0.05, maxiter=100, xatol=0.001, fatol=0.001)

Fits the simulation parameters D and k to discrete CF data

Expand source code
def fit(self,other,disp=True,std_relative=0.05,maxiter=100,xatol=1e-3,fatol=1e-3):
    """Fits simulation parameters D and k to fit a discrete CF data"""
    if not isinstance(other,SensPatankarResult):
        raise TypeError(f"other must be a SensPatankarResult not a {type(other).__name__}")
    if self.discrete:
        raise ValueError("the current instance contains discrete data, use it as other")
    if not other.discrete:
        raise ValueError("only discrete CF results can be fitted")
    # retrieve current Dlink and klink
    Dlink = self.restart_unsecure.inputs["multilayer"].Dlink
    klink = self.restart_unsecure.inputs["multilayer"].klink
    if Dlink is None and klink is None:
        raise ValueError("provide at least a Dlink or klink object")
    if Dlink is not None and not isinstance(Dlink,layerLink):
        raise TypeError(f"Dlink must be a layerLink not a {type(Dlink).__name__}")
    if klink is not None and not isinstance(klink,layerLink):
        raise TypeError(f"klink must be a layerLink not a {type(klink).__name__}")
    # options for the optimizer
    optimOptions = {"disp": disp, "maxiter": maxiter, "xatol": xatol, "fatol": fatol}
    # params is assembled by concatenating -log(Dlink.values) and log(klink.values)
    params_initial = np.concatenate((-np.log(Dlink.values),np.log(klink.values)))
    maskD = np.concatenate((np.ones(Dlink.nzlength, dtype=bool), np.zeros(klink.nzlength, dtype=bool)))
    maskk = np.concatenate((np.zeros(Dlink.nzlength, dtype=bool), np.ones(klink.nzlength, dtype=bool)))
    # distance criterion
    d2 = lambda: self.distanceSq(other, std_relative=std_relative) # d2 = lambda: self - other works also
    def objective(params):
        """objective function, all parameters are passed via layerLink"""
        logD = params[maskD]
        logk = params[maskk]
        Dlink.values = np.exp(-logD)
        klink.values = np.exp(logk)
        self.rerun(name="optimizer",color="OrangeRed",linewidth=4)
        return d2()
    def callback(params):
        """Called at each iteration to display current values."""
        Dtmp, ktmp = np.exp(-params[maskD]), np.exp(params[maskk])
        print("Fitting Iteration:\n",f"D={Dtmp} [m²/s]\n",f"k={ktmp} [a.u.]\n")
    # do the optimization
    result = minimize(objective,
                      params_initial,
                      method='Nelder-Mead',
                      callback=callback,
                      options=optimOptions)
    # extract the solution, be sure it is updated
    Dlink.values, klink.values = np.exp(-result.x[maskD]), np.exp(result.x[maskk])
    return result
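
The sketch below assumes the multilayer behind `sol` was built with `layerLink` handles attached as `Dlink`/`klink` (see `layer.py`; their construction is not shown here) and that `expt` is a discrete `SensPatankarResult`, e.g. produced with `pseudoexperiment()`.

```python
# Sketch only: Dlink/klink are carried by the multilayer stored in sol's inputs.
result = sol.fit(expt, disp=True, maxiter=50)   # Nelder-Mead on (-log D, log k)
print(result.x)                                 # optimized parameters (log-space)
# After fitting, Dlink.values / klink.values hold the adjusted D and k, and sol
# has been rerun with them (curve named "optimizer" in the comparison container).
```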
def interpolate_CF(self, t, kind='linear', fill_value='extrapolate')

Interpolates the concentration in the food (CF) at given time(s).

Parameters

t : float, list, tuple, or ndarray
Time(s) at which to interpolate CF values. - If a tuple, it should be (value or list, unit) and will be converted to SI. - If a scalar or list, it is assumed to be in SI units already.
kind : str, optional
Interpolation method. Default is "linear". Possible values: - "linear": Piecewise linear interpolation (default). - "nearest": Nearest-neighbor interpolation. - "zero": Zero-order spline interpolation. - "slinear", "quadratic", "cubic": Spline interpolations of various orders.
fill_value : str or float, optional
Specifies how to handle values outside the given range. - "extrapolate" (default): Extrapolates values beyond available data. - Any float: Uses a constant value for out-of-bounds interpolation.

Returns

ndarray
Interpolated CF values at the requested time(s).
Expand source code
def interpolate_CF(self, t, kind="linear", fill_value="extrapolate"):
    """
    Interpolates the concentration in the food (CF) at given time(s).

    Parameters
    ----------
    t : float, list, tuple, or ndarray
        Time(s) at which to interpolate CF values.
        - If a tuple, it should be (value or list, unit) and will be converted to SI.
        - If a scalar or list, it is assumed to be in SI units already.
    kind : str, optional
        Interpolation method. Default is "linear".
        Possible values:
        - "linear": Piecewise linear interpolation (default).
        - "nearest": Nearest-neighbor interpolation.
        - "zero": Zero-order spline interpolation.
        - "slinear", "quadratic", "cubic": Spline interpolations of various orders.
    fill_value : str or float, optional
        Specifies how to handle values outside the given range.
        - "extrapolate" (default): Extrapolates values beyond available data.
        - Any float: Uses a constant value for out-of-bounds interpolation.

    Returns
    -------
    ndarray
        Interpolated CF values at the requested time(s).
    """
    # Convert time input to SI units if provided as a tuple
    if isinstance(t, tuple):
        t, _ = check_units(t)  # Convert to numeric array

    # Ensure t is a NumPy array for vectorized operations
    t = np.atleast_1d(t)

    # Create the interpolant on demand with user-defined settings
    interp_function = interp1d(self.t, self.CF, kind=kind, fill_value=fill_value, bounds_error=False)

    # Return interpolated values
    return interp_function(t)
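
Typical calls, assuming `sol` is a `SensPatankarResult`: tuples are converted to SI with `check_units`, while plain numbers are taken as seconds.

```python
CF_10d  = sol.interpolate_CF((10, "days"))          # single time given with a unit
CF_days = sol.interpolate_CF(([1, 5, 10], "days"))  # several times sharing one unit
CF_SI   = sol.interpolate_CF([3600, 86400])         # times already in seconds
```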
def plotCF(self, t=None, trange=None)

Plot the concentration in the food (CF) as a function of time.

  • If self.discrete is True, plots discrete points.
  • If self.discrete is False, plots a continuous curve.
  • Highlights the target time(s).

Parameters

t : float, list, or None, optional
Specific time(s) for which the concentration should be highlighted. If None, defaults to ttarget.
trange : None, float, or list [t_min, t_max], optional
If None, the full profile is shown. If a float, it is treated as an upper bound (lower bound assumed 0). If a list [t_min, t_max], the profile is limited to that range.
Expand source code
def plotCF(self, t=None, trange=None):
    """
    Plot the concentration in the food (CF) as a function of time.

    - If `self.discrete` is True, plots discrete points.
    - If `self.discrete` is False, plots a continuous curve.
    - Highlights the target time(s).

    Parameters
    ----------
    t : float, list, or None, optional
        Specific time(s) for which the concentration should be highlighted.
        If None, defaults to `ttarget`.
    trange : None, float, or list [t_min, t_max], optional
        If None, the full profile is shown.
        If a float, it is treated as an upper bound (lower bound assumed 0).
        If a list `[t_min, t_max]`, the profile is limited to that range.
    """
    plt.rc('text', usetex=False) # Disable LaTeX rendering; use Matplotlib's built-in mathtext
    # Extract plot configuration
    plotconfig = self._plotconfig
    # Ensure t is a list (even if a single value is given)
    if t is None:
        t_values = [self.ttarget]
    elif isinstance(t, (int, float)):
        t_values = [t]
    elif isinstance(t, np.ndarray):
        t_values = t.flatten()
    elif isinstance(t, tuple):
        t_values = check_units(t)[0]
    else:
        t_values = np.array(t)  # Convert to array
    # Interpolate CF values at given times
    CF_t_values = self.interp_CF(t_values)
    # Handle trange selection
    if trange is None:
        t_plot = self.t
        CF_plot = self.CF
    else:
        # Convert trange to a valid range
        if isinstance(trange, (int, float)):
            trange = [0, trange]  # Assume lower bound is 0
        elif len(trange) != 2:
            raise ValueError("trange must be None, a single float (upper bound), or a list of two values [t_min, t_max]")
        # Validate range
        t_min, t_max = trange
        if t_min < self.t.min() or t_max > self.t.max():
            print("Warning: trange values are outside the available time range and may cause extrapolation.")
        # Generate time values within range
        mask = (self.t >= t_min) & (self.t <= t_max)
        t_plot = self.t[mask]
        CF_plot = self.CF[mask]
    # Set up colormap for multiple target values
    cmap = plt.get_cmap('viridis', len(t_values))
    norm = mcolors.Normalize(vmin=min(t_values), vmax=max(t_values))
    # Create the figure
    fig, ax = plt.subplots(figsize=(8, 6))
    # Plot behavior depends on whether data is discrete
    if self.discrete:
        ax.scatter(t_plot / plotconfig["tscale"], CF_plot / plotconfig["Cscale"],
                   color='b', label='Concentration in Food (Discrete)', marker='o', alpha=0.7)
    else:
        ax.plot(t_plot / plotconfig["tscale"], CF_plot / plotconfig["Cscale"],
                label='Concentration in Food', color='b')
    # Highlight each target time
    for i, tC in enumerate(t_values):
        color = tooclear(cmap(norm(tC))) if len(t_values) > 1 else 'r'  # Use colormap only if multiple t values

        # Vertical and horizontal lines
        ax.axvline(tC / plotconfig["tscale"], color=color, linestyle='--', linewidth=1)
        ax.axhline(CF_t_values[i] / plotconfig["Cscale"], color=color, linestyle='--', linewidth=1)
        # Highlight points
        ax.scatter(tC / plotconfig["tscale"], CF_t_values[i] / plotconfig["Cscale"],
                   color=color, edgecolor='black', zorder=3, marker='D')
        # Annotate time
        ax.text(tC / plotconfig["tscale"], min(CF_plot) / plotconfig["Cscale"],
                f'{(tC / plotconfig["tscale"]).item():.2f} {plotconfig["tunit"]}',
                verticalalignment='bottom', horizontalalignment='right', rotation=90, fontsize=10, color=color)
        # Annotate concentration
        ax.text(min(t_plot) / plotconfig["tscale"], CF_t_values[i] / plotconfig["Cscale"],
                f'{(CF_t_values[i] / plotconfig["Cscale"]).item():.2f} {plotconfig["Cunit"]}',
                verticalalignment='bottom', horizontalalignment='left', fontsize=10, color=color)
    # Labels and title
    ax.set_xlabel(f'Time [{plotconfig["tunit"]}]')
    ax.set_ylabel(f'Concentration in Food [{plotconfig["Cunit"]}]')
    title_main = "Concentration in Food vs. Time"
    if self.discrete:
        title_main += " (Discrete Data)"
    title_sub = rf"$\bf{{{self.name}}}$" + (f": {self.description}" if self.description else "")
    ax.set_title(f"{title_main}\n{title_sub}", fontsize=10)
    #ax.text(0.5, 1.05, title_sub, fontsize=8, ha="center", va="bottom", transform=ax.transAxes)
    ax.legend()
    ax.grid(True)
    plt.show()
    # Store metadata
    setattr(fig, _fig_metadata_atrr_, f"pltCF_{self.name}")
    return fig
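
Illustrative calls, assuming `sol` comes from the module example:

```python
fig = sol.plotCF()                        # highlight the default target time (ttarget)
fig = sol.plotCF(t=([2, 5, 10], "days"))  # highlight several times given with a unit
fig = sol.plotCF(trange=30*24*3600)       # restrict the plot to the first 30 days (bounds in seconds)
```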
def plotCx(self, t=None, nmax=15)

Plot the concentration profiles (Cx) in the packaging vs. position (x) for different times, using a color gradient similar to Parula, based on time values (not index order). Additionally, highlight the concentration profile at ttarget with a thick black line.

Parameters

t : list, array-like, or None, optional
List of specific times to plot. Only valid values (inside self.t) are used. If None, time values are selected using sqrt-spaced distribution.
nmax : int, optional
Maximum number of profiles to plot. The default is 15.
Expand source code
def plotCx(self, t=None, nmax=15):
    """
    Plot the concentration profiles (Cx) in the packaging vs. position (x) for different times,
    using a color gradient similar to Parula, based on time values (not index order).
    Additionally, highlight the concentration profile at `ttarget` with a thick black line.

    Parameters
    ----------
    t : list, array-like, or None, optional
        List of specific times to plot. Only valid values (inside self.t) are used.
        If None, time values are selected using sqrt-spaced distribution.
    nmax : int, optional
        Maximum number of profiles to plot. The default is 15.
    """
    plt.rc('text', usetex=False) # Disable LaTeX rendering; use Matplotlib's built-in mathtext
    # short circuit
    if self.discrete:
        print("discrete SensPatankarResult instance does not contain profile data, nothing to plot.")
        return None
    # extract plotconfig
    plotconfig = self._plotconfig
    # Ensure time values are within the available time range
    if t is None:
        # Default: Select `nmax` time values using sqrt-spacing
        nt = len(self.t)
        if nt <= nmax:
            t_values = self.t
        else:
            sqrt_t = np.sqrt(self.t)
            sqrt_t_values = np.linspace(sqrt_t[0], sqrt_t[-1], nmax)
            t_values = sqrt_t_values**2
    else:
        # Use user-specified time values
        if isinstance(t,tuple):
            t_values = check_units(t)[0]
        else:
            t_values = np.array(t)
        # Keep only valid times inside `self.t`
        t_values = t_values[(t_values >= self.t.min()) & (t_values <= self.t.max())]
        if len(t_values) == 0:
            print("Warning: No valid time values found in the specified range.")
            return
        # If more than `nmax`, keep the first `nmax` values
        t_values = t_values[:nmax]
    # Normalize time for colormap (Ensure at least one valid value)
    norm = mcolors.Normalize(vmin=t_values.min()/plotconfig["tscale"],
                             vmax=t_values.max()/plotconfig["tscale"]) if len(t_values) > 1 \
        else mcolors.Normalize(vmin=self.t.min()/plotconfig["tscale"],
                               vmax=self.t.max()/plotconfig["tscale"])
    cmap = plt.get_cmap('viridis', nmax)  # 'viridis' is similar to Parula
    # new figure
    fig, ax = plt.subplots(figsize=(8, 6))  # Explicitly create a figure and axis
    # Plot all valid concentration profiles with time-based colormap
    for tC in t_values:
        C = self.interp_Cx(tC)
        color = tooclear(cmap(norm(tC/plotconfig["tscale"])))  # Get color from colormap
        ax.plot(self.x / plotconfig["lscale"], C / plotconfig["Cscale"],
                color=color, alpha=0.9, label=f't={tC / plotconfig["tscale"]:.3g} {plotconfig["tunit"]}')
    # Highlight concentration profile at `ttarget`
    ax.plot(self.x / plotconfig["lscale"], self.Cxtarget / plotconfig["Cscale"], 'k-', linewidth=3,
            label=f't={self.ttarget[0] / plotconfig["tscale"]:.2g} {plotconfig["tunit"]} (target)')
    # Create ScalarMappable and add colorbar
    sm = cm.ScalarMappable(cmap=cmap, norm=norm)
    sm.set_array([])  # Needed for colorbar
    cbar = fig.colorbar(sm, ax=ax)  # Explicitly associate colorbar with axis
    cbar.set_label(f'Time [{plotconfig["tunit"]}]')
    ax.set_xlabel(f'Position [{plotconfig["lunit"]}]')
    ax.set_ylabel(f'Concentration in Packaging [{plotconfig["Cunit"]}]')
    title_main = "Concentration Profiles in Packaging vs. Position"
    title_sub = rf"$\bf{{{self.name}}}$" + (f": {self.description}" if self.description else "")
    ax.set_title(f"{title_main}\n{title_sub}", fontsize=10)
    #ax.text(0.5, 1.05, title_sub, fontsize=8, ha="center", va="bottom", transform=ax.transAxes)
    ax.grid(True)
    ax.legend()
    plt.show()
    # store metadata
    setattr(fig,_fig_metadata_atrr_,f"pltCx_{self.name}")
    return fig
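
For example, with the same `sol`:

```python
fig = sol.plotCx()                                # up to 15 sqrt-spaced profiles plus the target profile
fig = sol.plotCx(t=([1, 5, 10], "days"), nmax=5)  # user-selected times, capped at 5 profiles
```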
def pseudoexperiment(self, npoints=25, std_relative=0.05, randomtime=False, autorecord=False, seed=None, t=None, CF=None, scale='linear')

Generates discrete pseudo-experimental data from high-resolution simulated results.

Parameters

npoints : int, optional
Number of discrete time points to select (default: 25).
std_relative : float, optional
Relative standard deviation for added noise (default: 0.05).
randomtime : bool, optional
If True, picks random time points; otherwise, uses uniform spacing or a sqrt scale (default: False).
autorecord : bool, optional
If True, automatically adds the generated result to the container (default: False).
seed : int, optional
Random seed for reproducibility.
t : list or np.ndarray, optional
Specific time points to use instead of generated ones. If provided, CF must also be supplied.
CF : list or np.ndarray, optional
Specific CF values to use at the provided t time points. Must have the same length as t.
scale : str, optional
Determines how time points are distributed when randomtime=False: - "linear" (default): Uniformly spaced time points. - "sqrt": Time points are distributed more densely at the beginning using a square root scale.

Returns

SensPatankarResult
A new SensPatankarResult object flagged as discrete.

Raises

ValueError
If t and CF are provided but have mismatched lengths.
Expand source code
def pseudoexperiment(self, npoints=25, std_relative=0.05, randomtime=False, autorecord=False, seed=None, t=None, CF=None, scale='linear'):
    """
    Generates discrete pseudo-experimental data from high-resolution simulated results.

    Parameters
    ----------
    npoints : int, optional
        Number of discrete time points to select (default: 25).
    std_relative : float, optional
        Relative standard deviation for added noise (default: 0.05).
    randomtime : bool, optional
        If True, picks random time points; otherwise, uses uniform spacing or a sqrt scale (default: False).
    autorecord : bool, optional
        If True, automatically adds the generated result to the container (default: False).
    seed : int, optional
        Random seed for reproducibility.
    t : list or np.ndarray, optional
        Specific time points to use instead of generated ones. If provided, `CF` must also be supplied.
    CF : list or np.ndarray, optional
        Specific CF values to use at the provided `t` time points. Must have the same length as `t`.
    scale : str, optional
        Determines how time points are distributed when `randomtime=False`:
        - "linear" (default): Uniformly spaced time points.
        - "sqrt": Time points are distributed more densely at the beginning using a square root scale.

    Returns
    -------
    SensPatankarResult
        A new SensPatankarResult object flagged as discrete.

    Raises
    ------
    ValueError
        If `t` and `CF` are provided but have mismatched lengths.
    """

    if seed is not None:
        np.random.seed(seed)

    if t is not None:
        t_discrete = np.array(t, dtype=float)
        if CF is None or len(CF) != len(t_discrete):
            raise ValueError("When providing t, CF values must be provided and have the same length.")
        CF_discrete_noisy = np.array(CF, dtype=float)
    else:
        if randomtime:
            t_discrete = np.sort(np.random.uniform(self.t.min(), self.t.max(), npoints))
        else:
            if scale == 'sqrt':
                t_discrete = np.linspace(np.sqrt(self.t.min()), np.sqrt(self.t.max()), npoints) ** 2
            else:
                t_discrete = np.linspace(self.t.min(), self.t.max(), npoints)

        CF_discrete = self.interp_CF(t_discrete)
        noise = np.random.normal(loc=0, scale=std_relative * CF_discrete)
        CF_discrete_noisy = CF_discrete + noise
        CF_discrete_noisy = np.clip(CF_discrete_noisy, a_min=0, a_max=None)

    discrete_result = SensPatankarResult(
        name=f"{self.name}_discrete",
        description=f"Discrete pseudo-experimental data from {self.name}",
        ttarget=self.ttarget,
        t=t_discrete,
        C=np.zeros_like(t_discrete),
        CF=CF_discrete_noisy,
        fc=np.zeros_like(t_discrete),
        f=np.zeros_like(t_discrete),
        x=self.x,
        Cx=np.zeros((len(t_discrete), len(self.x))),
        tC=self.tC,
        C0eq=self.C0eq,
        timebase=self.timebase,
        restart=self.restart,
        restart_unsecure=self.restart_unsecure,
        xi=None,
        Cxi=None,
        _plotconfig=self._plotconfig,
        discrete=True
    )
    if autorecord:
        self.comparison.add(discrete_result, label="pseudo-experiment", color="black", marker='o', discrete=True)
    return discrete_result
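
For example, assuming `sol` is a continuous result:

```python
expt = sol.pseudoexperiment(npoints=30, std_relative=0.08, scale="sqrt", seed=42)
expt.plotCF()                  # discrete, noisy CF points densified at short times
err = sol.distanceSq(expt)     # distance of the noisy data to the smooth solution
```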
def rerun(self, name=None, color=None, linestyle=None, linewidth=None, container=None, **kwargs)

Rerun the simulation (while keeping everything unchanged). This function is intended to be used with layerLink objects to update parameters internally. R.rerun() stores the updated simulation results in R; Rupdate = R.rerun() returns the same (updated) instance, for chaining.

note: use R.resume() to resume/continue a simulation instead; rerun() is meant for sensitivity analysis/fitting.

Expand source code
def rerun(self,name=None,color=None,linestyle=None,linewidth=None, container=None, **kwargs):
    """
    Rerun the simulation (while keeping everything unchanged)
        This function is intended to be used with layerLink objects to update parameters internally.
        R.rerun() stores the updated simulation results in R
        Rupdate = R.rerun() returns the same (updated) instance, for chaining

    note: use R.resume() to resume/continue a simulation instead; rerun() is meant for sensitivity analysis/fitting.
    """
    F = self._lastmedium
    P = self._lastmultilayer
    if not isinstance(F, foodphysics):
        raise TypeError(f"the current object is corrupted, _lastmedium is {type(self._lastmedium).__name__}")
    if not isinstance(P, layer):
        raise TypeError(f"the current object is corrupted, _lastmultilayer is {type(self._lastmultilayer).__name__}")
    container = self.comparison if container is None else container
    if not isinstance(container,CFSimulationContainer):
        raise TypeError(f"the container should be a CFSimulationContainer not a {type(CFSimulationContainer).__name__}")
    # rerun the simulation using unsecure restart data
    inputs = self.restart_unsecure.inputs # all previous inputs
    R = senspatankar(multilayer=inputs["multilayer"],
                          medium=inputs["medium"],
                          name=name if name is not None else inputs["name"],
                          description=kwargs.get("description",inputs["description"]),
                          t=kwargs.get("t",inputs["t"]),
                          autotime=kwargs.get("autotime",inputs["autotime"]),
                          timescale=kwargs.get("timescale",inputs["timescale"]),
                          Cxprevious=inputs["Cxprevious"],
                          ntimes=kwargs.get("ntimes",inputs["ntimes"]),
                          RelTol=kwargs.get("RelTol",inputs["RelTol"]),
                          AbsTol=kwargs.get("AbsTol",inputs["AbsTol"]),
                          container=container)
    # Update numeric data in self with those in R
    self.t = R.t
    self.C = R.C
    self.CF = R.CF
    self.fc = R.fc
    self.f = R.f
    self.x = R.x
    self.Cx = R.Cx
    self.tC = R.tC
    self.C0eq = R.C0eq
    self.timebase = R.timebase
    self.discrete = R.discrete
    self.interp_CF = R.interp_CF
    self.CFtarget = R.CFtarget
    self.interp_Cx = R.interp_Cx
    self.Cxtarget = R.Cxtarget
    # Update label, color, linestyle, linewidth for the new curve (-1: last in the container)
    # note if name already exists, the previous content is replaced
    self.comparison.update(-1, label=name, color=color, linestyle=linestyle, linewidth=linewidth)
    return self # for chaining
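
A sketch of the intended workflow, assuming `Dlink` is the `layerLink` object already attached to the multilayer used to produce `sol` (its construction belongs to `layer.py`):

```python
# Sketch only: Dlink is the layerLink carried by sol's multilayer.
Dlink.values = Dlink.values * 2             # perturb the diffusivities in place
sol.rerun(name="D x2", color="OrangeRed")   # re-solve with the same inputs and updated links
```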
def resume(self, t=None, **kwargs)

Resume the simulation for a new duration (with all parameters unchanged).

For convenience, user overrides are provided as parameter = value, with parameter = "name", "description", …, "RelTol", "AbsTol" (see senspatankar). Use specifically: CF0 to assign a different concentration for the food, Cx0 (a Cprofile object) to assign a different concentration profile (not recommended), and medium to set a different medium (food) in contact.

Expand source code
def resume(self,t=None,**kwargs):
    """
    Resume the simulation for a new duration (with all parameters unchanged)

    For convenience, user overrides are provided as:
        parameter = value
        with parameter = "name","description"..."RelTol","AbsTol" (see senspatankar)
    Use specifically:
        CF0 to assign a different concentration for the food
        Cx0 (Cprofile object) to assign a different concentration profile (not recommended)
        medium to set a different medium (food) in contact
    """

    # retrieve previous results
    previousCF = self.restart.CF # CF at target
    previousCx = self.restart.Cprofile # corresponding profile
    previousmedium = self.restart.inputs["medium"].copy()
    previousmedium.CF0 = previousCF # we apply the concentration
    # CF override with CF=new value
    isCF0forced = "CF0" in kwargs
    newmedium = kwargs.get("medium",previousmedium)
    if isCF0forced:
        newCF0 = kwargs.get("CF0",previousCF)
        newmedium.CF0 = newCF0
    if t is None:
        ttarget = newmedium.get_param("contacttime",(10,"days"),acceptNone=False)
        t = 2*ttarget
    # Concentration profile override with Cx0=new profile
    newCx0 = kwargs.get("Cx0",previousCx)
    if not isinstance(newCx0,Cprofile):
        raise TypeError(f"Cx0 should be a Cprofile object not a {type(newCx0).__name__}")

    # extend the existing solution
    inputs = self.restart.inputs # all previous inputs
    newsol = senspatankar(multilayer=inputs["multilayer"],
                          medium=newmedium,
                          name=kwargs.get("name",inputs["name"]),
                          description=kwargs.get("description",inputs["description"]),
                          t=t,
                          autotime=kwargs.get("autotime",inputs["autotime"]),
                          timescale=kwargs.get("timescale",inputs["timescale"]),
                          Cxprevious=newCx0,
                          ntimes=kwargs.get("ntimes",inputs["ntimes"]),
                          RelTol=kwargs.get("RelTol",inputs["RelTol"]),
                          AbsTol=kwargs.get("AbsTol",inputs["AbsTol"]))
    return newsol
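
For example, with `sol` and `ethanol` as in the module example:

```python
sol2 = sol.resume()                  # continue for twice the medium's contact time
sol3 = sol.resume(CF0=0.0)           # restart the food phase at zero concentration
sol4 = sol.resume(medium=ethanol())  # put the packaging in contact with a fresh medium
```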
def savestate(self, multilayer, medium)

Saves senspatankar inputs for simulation chaining

Expand source code
def savestate(self,multilayer,medium):
    """Saves senspantankar inputs for simulation chaining"""
    self._lastmedium = medium
    self._lastmultilayer = multilayer
    self._isstatesaved = True
def update(self, **kwargs)

Update modifiable parameters of the SensPatankarResult object.

Parameters

  • name (str): New name for the object.
  • description (str): New description.
  • tscale (float or tuple): Time scale (can be tuple like (1, "day")).
  • tunit (str): Time unit.
  • lscale (float or tuple): Length scale (can be tuple like (1e-6, "µm")).
  • lunit (str): Length unit.
  • Cscale (float or tuple): Concentration scale (can be tuple like (1, "a.u.")).
  • Cunit (str): Concentration unit.
Expand source code
def update(self, **kwargs):
    """
    Update modifiable parameters of the SensPatankarResult object.
    Parameters:
        - name (str): New name for the object.
        - description (str): New description.
        - tscale (float or tuple): Time scale (can be tuple like (1, "day")).
        - tunit (str): Time unit.
        - lscale (float or tuple): Length scale (can be tuple like (1e-6, "µm")).
        - lunit (str): Length unit.
        - Cscale (float or tuple): Concentration scale (can be tuple like (1, "a.u.")).
        - Cunit (str): Concentration unit.
    """
    def checkunits(value):
        """Helper function to handle unit conversion for scale/unit tuples."""
        if isinstance(value, tuple) and len(value) == 2:
            scale, unit = check_units(value)
            scale, unit = np.array(scale, dtype=float), str(unit)  # Ensure correct types
            return scale.item(), unit  # Convert numpy array to float
        elif isinstance(value, (int, float, np.ndarray)):
            value = np.array(value, dtype=float)  # Ensure float
            return value.item(), None  # Return as float with no unit change
        else:
            raise ValueError(f"Invalid value for scale/unit: {value}")

    # Update `name` and `description` if provided
    if "name" in kwargs:
        self.name = str(kwargs["name"])
    if "description" in kwargs:
        self.description = str(kwargs["description"])
    # Update `_plotconfig` parameters
    for key in ["tscale", "tunit", "lscale", "lunit", "Cscale", "Cunit"]:
        if key in kwargs:
            value = kwargs[key]

            if key in ["tscale", "lscale", "Cscale"]:
                value, unit = checkunits(value)  # Process unit conversion
                self._plotconfig[key] = value
                if unit is not None:
                    self._plotconfig[key.replace("scale", "unit")] = unit  # Ensure unit consistency
            else:
                self._plotconfig[key] = str(value)  # Convert unit strings directly
    return self  # Return self for method chaining if needed
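
For example (the unit tuples follow the docstring above and are processed by `check_units`):

```python
sol.update(name="PET/ethanol", description="storage at 40°C",
           tscale=(1, "day"), lscale=(1e-6, "µm"))
sol.plotCF()   # the time axis is now scaled and labelled with the updated unit
```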
class foodlayer (**kwargs)

=============================================================================== SFPPy Module: Food Layer =============================================================================== foodlayer models food as a 0D layer in mass transfer simulations, serving as the primary class for defining the medium in contact with a packaging material.


Core Functionality

  • Models food as a zero-dimensional (0D) medium with:
  • A mass transfer resistance (h) at the interface.
  • A partitioning behavior (k) between food and packaging.
  • Contact time (contacttime) and temperature (contacttemperature).
  • Defines food geometry:
  • surfacearea: Contact area with the material (m²).
  • volume: Total volume of the food medium (m³).
  • Supports impervious (nofood) and periodic (setoff) conditions.

Key Properties

  • h: Mass transfer coefficient (m/s) defining resistance at the interface.
  • k: Partition coefficient describing substance solubility in food.
  • contacttime: Time duration of the packaging-food interaction.
  • contacttemperature: Temperature at the packaging interface (°C).
  • surfacearea: Contact surface area between packaging and food (m²).
  • volume: Volume of the food medium (m³).
  • density: Density of the food medium (kg/m³).
  • substance: Migrant (chemical) diffusing into food.
  • medium: Food medium in contact with packaging.
  • impervious: True if no mass transfer occurs (nofood class).
  • PBC: True if periodic boundary conditions apply (setoff class).

Methods

  • __rshift__(self, other): Propagates food properties to a packaging layer (food >> layer).
  • __matmul__(self, other): Equivalent to >>, enables food @ layer.
  • migration(self, material, **kwargs): Simulates migration into a packaging layer.
  • contact(self, material, **kwargs): Alias for migration.
  • update(self, **kwargs): Dynamically updates food properties.
  • get_param(self, key, default=None, acceptNone=True): Retrieves a parameter safely.
  • refresh(self): Ensures all properties are validated before simulation.
  • acknowledge(self, what, category): Tracks inherited properties.
  • copy(self, **kwargs): Creates a deep copy of the food object.

Integration with SFPPy Modules

  • Used as the left-side boundary in migration.py simulations.
  • Interacts with layer.py to propagate temperature and partitioning effects.
  • Interfaces with geometry.py for food-contacting packaging simulations.

Usage Example

from patankar.food import foodlayer
medium = foodlayer(name="ethanol", contacttemperature=(40, "degC"))

from patankar.layer import LDPE
packaging = LDPE(l=50e-6, D=1e-14)

# Propagate food properties to the packaging
medium >> packaging

# Simulate migration
from patankar.migration import senspatankar
solution = senspatankar(packaging, medium)
solution.plotCF()

Notes

  • The foodlayer class extends foodphysics and provides a physical representation of food in contact with packaging.
  • Subclasses include:
  • setoff: Periodic boundary conditions (stacked packaging).
  • nofood: Impervious boundary (no mass transfer).
  • realcontact, testcontact: Standardized food contact conditions.
  • The h parameter determines if the medium is well-mixed or diffusion-limited.

general constructor

Expand source code
class foodlayer(foodphysics):
    """
    ===============================================================================
    SFPPy Module: Food Layer
    ===============================================================================
    `foodlayer` models food as a **0D layer** in mass transfer simulations, serving
    as the primary class for defining the medium in contact with a packaging material.

    ------------------------------------------------------------------------------
    **Core Functionality**
    ------------------------------------------------------------------------------
    - Models food as a **zero-dimensional (0D) medium** with:
      - A **mass transfer resistance (`h`)** at the interface.
      - A **partitioning behavior (`k`)** between food and packaging.
      - **Contact time (`contacttime`) and temperature (`contacttemperature`)**.
    - Defines **food geometry**:
      - `surfacearea`: Contact area with the material (m²).
      - `volume`: Total volume of the food medium (m³).
    - Supports **impervious (`nofood`) and periodic (`setoff`) conditions**.

    ------------------------------------------------------------------------------
    **Key Properties**
    ------------------------------------------------------------------------------
    - `h`: Mass transfer coefficient (m/s) defining resistance at the interface.
    - `k`: Partition coefficient describing substance solubility in food.
    - `contacttime`: Time duration of the packaging-food interaction.
    - `contacttemperature`: Temperature at the packaging interface (°C).
    - `surfacearea`: Contact surface area between packaging and food (m²).
    - `volume`: Volume of the food medium (m³).
    - `density`: Density of the food medium (kg/m³).
    - `substance`: Migrant (chemical) diffusing into food.
    - `medium`: Food medium in contact with packaging.
    - `impervious`: `True` if no mass transfer occurs (`nofood` class).
    - `PBC`: `True` if periodic boundary conditions apply (`setoff` class).

    ------------------------------------------------------------------------------
    **Methods**
    ------------------------------------------------------------------------------
    - `__rshift__(self, other)`: Propagates food properties to a packaging layer (`food >> layer`).
    - `__matmul__(self, other)`: Equivalent to `>>`, enables `food @ layer`.
    - `migration(self, material, **kwargs)`: Simulates migration into a packaging layer.
    - `contact(self, material, **kwargs)`: Alias for `migration()`.
    - `update(self, **kwargs)`: Dynamically updates food properties.
    - `get_param(self, key, default=None, acceptNone=True)`: Retrieves a parameter safely.
    - `refresh(self)`: Ensures all properties are validated before simulation.
    - `acknowledge(self, what, category)`: Tracks inherited properties.
    - `copy(self, **kwargs)`: Creates a deep copy of the food object.

    ------------------------------------------------------------------------------
    **Integration with SFPPy Modules**
    ------------------------------------------------------------------------------
    - Used as the **left-side boundary** in `migration.py` simulations.
    - Interacts with `layer.py` to propagate temperature and partitioning effects.
    - Interfaces with `geometry.py` for food-contacting packaging simulations.

    ------------------------------------------------------------------------------
    **Usage Example**
    ------------------------------------------------------------------------------
    ```python
    from patankar.food import foodlayer
    medium = foodlayer(name="ethanol", contacttemperature=(40, "degC"))

    from patankar.layer import LDPE
    packaging = LDPE(l=50e-6, D=1e-14)

    # Propagate food properties to the packaging
    medium >> packaging

    # Simulate migration
    from patankar.migration import senspatankar
    solution = senspatankar(packaging, medium)
    solution.plotCF()
    ```

    ------------------------------------------------------------------------------
    **Notes**
    ------------------------------------------------------------------------------
    - The `foodlayer` class extends `foodphysics` and provides a physical
      representation of food in contact with packaging.
    - Subclasses include:
      - `setoff`: Periodic boundary conditions (stacked packaging).
      - `nofood`: Impervious boundary (no mass transfer).
      - `realcontact`, `testcontact`: Standardized food contact conditions.
    - The `h` parameter determines if the medium is **well-mixed** or **diffusion-limited**.

    """
    level = "root"
    description = "root food class"  # Remains as class attribute
    name = "generic food layer"
    # -----------------------------------------------------------------------------
    # Class attributes that can be overridden in instances.
    # Their default values are set in classes and overridden with similar
    # instance properties with @property.setter.
    # These values cannot be set during construction, but only after instantiation.
    # A common scale for polarity index for solvents is from 0 to 10:
    #     - 0-3: Non-polar solvents (e.g., hexane)
    #     - 4-6: Moderately polar solvents (e.g., acetone)
    #     - 7-10: Polar solvents (e.g., water)
    # -----------------------------------------------------------------------------
    # These properties are essential for model predictions, they cannot be customized
    # beyond the rules accepted by the model predictors (they are not metadata)
    # note: similar attributes exist for patankar.layer objects (similar possible values)
    _physicalstate = "liquid"   # solid, liquid (default), gas, porous
    _chemicalclass = "other"    # polymer, other (default)
    _chemicalsubstance = None   # None (default), monomer for polymers
    _polarityindex = 0.0        # polarity index (roughly: 0=hexane, 10=water)
    # -----------------------------------------------------------------------------
    # Class attributes duplicated as instance parameters
    # -----------------------------------------------------------------------------
    volume,volumeUnits = check_units((1,"dm**3"))
    surfacearea,surfaceareaUnits = check_units((6,"dm**2"))
    density,densityUnits = check_units((1000,"kg/m**3"))
    CF0,CF0units = check_units((0,NoUnits))  # initial concentration (arbitrary units)
    contacttime, contacttime_units = check_units((10,"days"))
    contacttemperature,contacttemperatureUnits = check_units((40,"degC"),ExpectedUnits="degC") # temperature in °C
    _substance = None # substance container / similar construction in pantankar.layer = migrant
    _k0model = None
    # -----------------------------------------------------------------------------
    # Getter methods for class/instance properties: same definitions as in patankar.layer (mandatory)
    # medium properties
    # -----------------------------------------------------------------------------
    # PHASE PROPERTIES  (attention chemicalsubstance=F substance, substance=i substance)
    @property
    def physicalstate(self): return self._physicalstate
    @property
    def chemicalclass(self): return self._chemicalclass
    @property
    def chemicalsubstance(self): return self._chemicalsubstance
    @property
    def simulant(self): return self._chemicalsubstance # simulant is an alias of chemicalsubstance
    @property
    def polarityindex(self): return self._polarityindex
    @property
    def ispolymer(self): return self.chemicalclass == "polymer"
    @property
    def issolid(self): return self.physicalstate == "solid"
    # SUBSTANCE/SOLUTE/MIGRANT properties  (attention chemicalsubstance=F substance, substance=i substance)
    @property
    def substance(self): return self._substance # substance can be ambiguous
    @property
    def migrant(self): return self.substance    # synonym
    @property
    def solute(self): return self.substance     # synonym

    # -----------------------------------------------------------------------------
    # Setter methods for class/instance properties: same definitions as in patankar.layer (mandatory)
    # -----------------------------------------------------------------------------
    # PHASE PROPERTIES  (attention chemicalsubstance=F substance, substance=i substance)
    @physicalstate.setter
    def physicalstate(self,value):
        if value not in ("solid","liquid","gas","supercritical"):
            raise ValueError(f"physicalstate must be solid/liduid/gas/supercritical and not {value}")
        self._physicalstate = value
    @chemicalclass.setter
    def chemicalclass(self,value):
        if value not in ("polymer","other"):
            raise ValueError(f"chemicalclass must be polymer/oher and not {value}")
        self._chemicalclass= value
    @chemicalsubstance.setter
    def chemicalsubstance(self,value):
        if not isinstance(value,str):
            raise ValueError("chemicalsubtance must be str not a {type(value).__name__}")
        self._chemicalsubstance= value
    @simulant.setter
    def simulant(self,value):
        self.chemicalsubstance = value # simulant is an alias of chemicalsubstance
    @polarityindex.setter
    def polarityindex(self,value):
        if not isinstance(value,(float,int)):
            raise ValueError("polarity index must be float not a {type(value).__name__}")
        # rescaled to match predictions - standard scale [0,10.2] - predicted scale [0,7.12]
        return self._polarityindex * migrant("water").polarityindex/10.2
    # SUBSTANCE/SOLUTE/MIGRANT properties  (attention chemicalsubstance=F substance, substance=i substance)
    @substance.setter
    def substance(self,value):
        if isinstance(value,str):
            value = migrant(value)
        if not isinstance(value,migrant):
            raise TypeError(f"substance/migrant/solute must be a migrant not a {type(value).__name__}")
        self._substance = value
    @migrant.setter
    def migrant(self,value):
        self.substance = value
    @solute.setter
    def solute(self,value):
        self.substance = value
    # -----------------------------------------------------------------------------
    # Henry-like coefficient k and its alias k0 (internal use)
    # -----------------------------------------------------------------------------
    #   - k is the name of the Henry-like property for food (as set and seen by the user)
    #   - k0 is the property operated by migration
    #   - k0 = k except if kmodel (lambda function) does not return None
    #   - kmodel returns None if _substance is not set (proper migrant)
    #   - kmodel = None will override any existing kmodel
    #   - kmodel must be initialized to "default" to refresh its definition with self
    # note: The implementation is almost symmetric with kmodel in patankar.layer.
    # The main differences are:
    #    - food classes are instantiated by foodphysics
    #    - k is used to store the value of k0 (not _k or _k0)
    # -----------------------------------------------------------------------------
    # layer substance (of class migrant or None)
    # k0 and k0units (k and kunits are user inputs)
    @property
    def k0(self):
        ktmp = None
        if self.kmodel == "default": # default behavior
            ktmp = self._compute_kmodel()
        elif callable(self.kmodel): # user override (not the default function)
            ktmp = self.kmodel()
        if ktmp:
            return np.full_like(self.k, ktmp,dtype=np.float64)
        return self.k
    @k0.setter
    def k0(self,value):
        if not isinstance(value,(int,float,np.ndarray)):
            TypeError("k0 must be int, float or np.ndarray")
        if isinstance(self.k,int): self.k = float(self.k)
        self.k = np.full_like(self.k,value,dtype=np.float64)
    @property
    def kmodel(self):
        return self._kmodel
    @kmodel.setter
    def kmodel(self,value):
        if value is None or callable(value):
            self._kmodel = value
        else:
            raise ValueError("kmodel must be None or a callable function")
    @property
    def _compute_kmodel(self):
        """Return a callable function that evaluates k with updated parameters."""
        if not isinstance(self._substance,migrant) or self._substance.keval() is None or self.chemicalsubstance is None:
            return lambda **kwargs: None  # Return a function that always returns None
        template = self._substance.ktemplate.copy()
        # add solute (i) properties: Pi and Vi have been set by loadpubchem already
        template.update(ispolymer = False)
        def func(**kwargs):
            if self.chemicalsubstance:
                simulant = migrant(self.chemicalsubstance)
                template.update(Pk = simulant.polarityindex,
                                Vk = simulant.molarvolumeMiller)
                k = self._substance.k.evaluate(**dict(template, **kwargs))
                return k
            else:
                return self.k
        return func # we return a callable function not a value

Ancestors

  • patankar.food.foodphysics

Subclasses

  • patankar.food.foodproperty

Class variables

var CF0
var CF0units
var contacttemperature
var contacttemperatureUnits
var contacttime
var contacttime_units
var density
var densityUnits
var description
var level
var name
var surfacearea
var surfaceareaUnits
var volume
var volumeUnits

Instance variables

var chemicalclass
Expand source code
@property
def chemicalclass(self): return self._chemicalclass
var chemicalsubstance
Expand source code
@property
def chemicalsubstance(self): return self._chemicalsubstance
var ispolymer
Expand source code
@property
def ispolymer(self): return self.chemicalclass == "polymer"
var issolid
Expand source code
@property
def issolid(self): return self.physicalstate == "solid"
var k0
Expand source code
@property
def k0(self):
    ktmp = None
    if self.kmodel == "default": # default behavior
        ktmp = self._compute_kmodel()
    elif callable(self.kmodel): # user override (not the default function)
        ktmp = self.kmodel()
    if ktmp:
        return np.full_like(self.k, ktmp,dtype=np.float64)
    return self.k
var kmodel
Expand source code
@property
def kmodel(self):
    return self._kmodel
var migrant
Expand source code
@property
def migrant(self): return self.substance    # synonym
var physicalstate
Expand source code
@property
def physicalstate(self): return self._physicalstate
var polarityindex
Expand source code
@property
def polarityindex(self): return self._polarityindex
var simulant
Expand source code
@property
def simulant(self): return self._chemicalsubstance # simulant is an alias of chemicalsubstance
var solute
Expand source code
@property
def solute(self): return self.substance     # synonym
var substance
Expand source code
@property
def substance(self): return self._substance # substance can be ambiguous
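
The k0/kmodel mechanism shown in the source above can be overridden with any callable; a minimal sketch (the constant 5.0 and the initial k are arbitrary illustration values):

import numpy as np
from patankar.food import foodlayer

medium = foodlayer(name="simulant")
medium.k = np.array([1.0])     # user-facing Henry-like coefficient (arbitrary value)
print(medium.k0)               # no substance set: the default model returns None, so k0 falls back to k

medium.kmodel = lambda: 5.0    # user override of the partitioning model
print(medium.k0)               # now an array filled with 5.0 (same shape as medium.k)
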
class foodphysics (**kwargs)

=============================================================================== SFPPy Module: Food Physics (Base Class) =============================================================================== foodphysics serves as the base class for all food-related objects in mass transfer simulations. It defines key parameters for food interaction with packaging materials and implements dynamic property propagation for simulation models.


Core Functionality

  • Defines mass transfer properties:
      • h: Mass transfer coefficient (m/s)
      • k: Partition coefficient (dimensionless)
  • Implements contact conditions:
      • contacttime: Duration of food-packaging contact
      • contacttemperature: Temperature of the contact interface
  • Supports inheritance and property propagation to layers.
  • Provides physical state representation (solid, liquid, gas).
  • Allows customization of mass transfer coefficients via kmodel.

Key Properties

  • h: Mass transfer coefficient (m/s) defining resistance at the interface.
  • k: Henry-like partition coefficient between the food and the material.
  • contacttime: Time duration of the packaging-food interaction.
  • contacttemperature: Temperature at the packaging interface (°C).
  • surfacearea: Contact surface area between packaging and food (m²).
  • volume: Volume of the food medium (m³).
  • density: Density of the food medium (kg/m³).
  • substance: The migrating substance (e.g., a chemical compound).
  • medium: The food medium in contact with packaging.
  • kmodel: Custom partitioning model (can be overridden by the user).

Methods

  • __rshift__(self, other): Propagates food properties to a layer (food >> layer).
  • __matmul__(self, other): Equivalent to >>, enables food @ layer.
  • migration(self, material, **kwargs): Simulates migration into a packaging layer.
  • contact(self, material, **kwargs): Alias for migration.
  • update(self, **kwargs): Dynamically updates food properties.
  • get_param(self, key, default=None, acceptNone=True): Retrieves a parameter safely.
  • refresh(self): Ensures all properties are validated before simulation.
  • acknowledge(self, what, category): Tracks inherited properties.
  • copy(self, **kwargs): Creates a deep copy of the food object.

Integration with SFPPy Modules

  • Works with migration.py to define the left-side boundary condition.
  • Interfaces with layer.py to apply contact temperature propagation.
  • Connects with geometry.py for food-contacting packaging surfaces.

Usage Example

from patankar.food import foodphysics
from patankar.layer import layer

medium = foodphysics(contacttemperature=(40, "degC"), h=(1e-6, "m/s"))
packaging_layer = layer(D=1e-14, l=50e-6)

# Propagate food properties to the layer
medium >> packaging_layer

# Simulate migration
from patankar.migration import senspatankar
solution = senspatankar(packaging_layer, medium)
solution.plotCF()

Notes

  • The foodphysics class is the parent of foodlayer, nofood, setoff, realcontact, and testcontact.
  • The PBC property identifies periodic boundary conditions (used in setoff); a short sketch is given below.
  • This class provides dynamic inheritance for mass transfer properties.
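
A hedged sketch of the boundary-condition subclasses named above: nofood gives an impervious contact and setoff a periodic (set-off) contact. The LDPE thickness and diffusivity are placeholder values:

from patankar.food import nofood, setoff
from patankar.layer import LDPE

packaging = LDPE(l=100e-6, D=1e-14)   # placeholder values

sealed = nofood()                     # impervious boundary: sealed.impervious is True
stacked = setoff()                    # periodic boundary conditions: stacked.PBC is True

# either object is used in place of a regular food medium
sol = stacked.migration(packaging)
sol.plotCF()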

general constructor

Expand source code
class foodphysics:
    """
    ===============================================================================
    SFPPy Module: Food Physics (Base Class)
    ===============================================================================
    `foodphysics` serves as the **base class** for all food-related objects in mass
    transfer simulations. It defines key parameters for food interaction with packaging
    materials and implements dynamic property propagation for simulation models.

    ------------------------------------------------------------------------------
    **Core Functionality**
    ------------------------------------------------------------------------------
    - Defines **mass transfer properties**:
      - `h`: Mass transfer coefficient (m/s)
      - `k`: Partition coefficient (dimensionless)
    - Implements **contact conditions**:
      - `contacttime`: Duration of food-packaging contact
      - `contacttemperature`: Temperature of the contact interface
    - Supports **inheritance and property propagation** to layers.
    - Provides **physical state representation** (`solid`, `liquid`, `gas`).
    - Allows **customization of mass transfer coefficients** via `kmodel`.

    ------------------------------------------------------------------------------
    **Key Properties**
    ------------------------------------------------------------------------------
    - `h`: Mass transfer coefficient (m/s) defining resistance at the interface.
    - `k`: Henry-like partition coefficient between the food and the material.
    - `contacttime`: Time duration of the packaging-food interaction.
    - `contacttemperature`: Temperature at the packaging interface (°C).
    - `surfacearea`: Contact surface area between packaging and food (m²).
    - `volume`: Volume of the food medium (m³).
    - `density`: Density of the food medium (kg/m³).
    - `substance`: The migrating substance (e.g., a chemical compound).
    - `medium`: The food medium in contact with packaging.
    - `kmodel`: Custom partitioning model (can be overridden by the user).

    ------------------------------------------------------------------------------
    **Methods**
    ------------------------------------------------------------------------------
    - `__rshift__(self, other)`: Propagates food properties to a layer (`food >> layer`).
    - `__matmul__(self, other)`: Equivalent to `>>`, enables `food @ layer`.
    - `migration(self, material, **kwargs)`: Simulates migration into a packaging layer.
    - `contact(self, material, **kwargs)`: Alias for `migration()`.
    - `update(self, **kwargs)`: Dynamically updates food properties.
    - `get_param(self, key, default=None, acceptNone=True)`: Retrieves a parameter safely.
    - `refresh(self)`: Ensures all properties are validated before simulation.
    - `acknowledge(self, what, category)`: Tracks inherited properties.
    - `copy(self, **kwargs)`: Creates a deep copy of the food object.

    ------------------------------------------------------------------------------
    **Integration with SFPPy Modules**
    ------------------------------------------------------------------------------
    - Works with `migration.py` to define the **left-side boundary condition**.
    - Interfaces with `layer.py` to apply contact temperature propagation.
    - Connects with `geometry.py` for food-contacting packaging surfaces.

    ------------------------------------------------------------------------------
    **Usage Example**
    ------------------------------------------------------------------------------
    ```python
    from patankar.food import foodphysics
    from patankar.layer import layer

    medium = foodphysics(contacttemperature=(40, "degC"), h=(1e-6, "m/s"))
    packaging_layer = layer(D=1e-14, l=50e-6)

    # Propagate food properties to the layer
    medium >> packaging_layer

    # Simulate migration
    from patankar.migration import senspatankar
    solution = senspatankar(packaging_layer, medium)
    solution.plotCF()
    ```

    ------------------------------------------------------------------------------
    **Notes**
    ------------------------------------------------------------------------------
    - The `foodphysics` class is the parent of `foodlayer`, `nofood`, `setoff`,
      `realcontact`, and `testcontact`.
    - The `PBC` property identifies periodic boundary conditions (used in `setoff`).
    - This class provides **dynamic inheritance** for mass transfer properties.

    """

    # General descriptors
    description = "Root physics class used to implement food and mass transfer physics"  # Remains as class attribute
    name = "food physics"
    level = "base"

    # Low-level prediction properties (F=contact medium, i=solute/migrant)
    # these @properties are defined by foodlayer, they should be duplicated
    _lowLevelPredictionPropertyList = [
        "chemicalsubstance","simulant","polarityindex","ispolymer","issolid", # F: common with patankar.layer
        "physicalstate","chemicalclass", # phase F properties
        "substance","migrant","solute", # i properties with synonyms substance=migrant=solute
        # users use "k", but internally we use k0, keep _kmodel in the instance
        "k0","k0unit","kmodel","_compute_kmodel" # Henry-like coefficients returned as properties with possible user override with medium.k0model=None or a function
        ]

    # ------------------------------------------------------
    # Transfer rules for food1 >> food2 and food1 >> result
    # ------------------------------------------------------

    # Mapping of properties to their respective categories
    _list_categories = {
        "contacttemperature": "contact",
        "contacttime": "contact",
        "surfacearea": "geometry",
        "volume": "geometry",
        "substance": "substance",
        "medium": "medium"
    }

    # Rules for property transfer with >> or @ based on object type
    # ["property name"]["name of the destination class"][attr]
    #   - if onlyifinherited, only inherited values are transferred
    #   - if checkNumPy, the value will be transferred as a np.ndarray
    #   - name is the name of the property in the destination class (use "" to keep the same name)
    #   - prototype is the class itself (available only after instantiation, keep None here)
    _transferable_properties = {
        "contacttemperature": {
            "foodphysics": {
                "onlyifinherited": True,
                "checkNumPy": False,
                "as": "",
                "prototype": None,
            },
            "layer": {
                "onlyifinherited": False,
                "checkNumPy": True,
                "as": "T",
                "prototype": None
            }
        },
        "contacttime": {
            "foodphysics": {
                "onlyifinherited": True,
                "checkNumPy": True,
                "as": "",
                "prototype": None,
            },
            "SensPatankarResult": {
                "onlyifinherited": False,
                "checkNumPy": True,
                "as": "t",
                "prototype": None
            }
        },
        "surfacearea": {
            "foodphysics": {
                "onlyifinherited": False,
                "checkNumPy": False,
                "as": "surfacearea",
                "prototype": None
            }
        },
        "volume": {
            "foodphysics": {
                "onlyifinherited": False,
                "checkNumPy": True,
                "as": "",
                "prototype": None
            }
        },
        "substance": {
            "foodlayer": {
                "onlyifinherited": False,
                "checkNumPy": False,
                "as": "",
                "prototype": None,
            },
            "layer": {
                "onlyifinherited": False,
                "checkNumPy": False,
                "as": "",
                "prototype": None
            }
        },
        "medium": {
            "layer": {
                "onlyifinherited": False,
                "checkNumPy": False,
                "as": "",
                "prototype": None
            }
        },
    }


    def __init__(self, **kwargs):
        """general constructor"""

        # local import
        from patankar.migration import SensPatankarResult

        # numeric validator
        def numvalidator(key,value):
            if key in parametersWithUnits:          # the parameter is a physical quantity
                if isinstance(value,tuple):         # the supplied value as unit
                    value,_ = check_units(value)    # we convert to SI, we drop the units
                if not isinstance(value,np.ndarray):
                    value = np.array([value])       # we force NumPy class
            return value

        # Iterate through the MRO (excluding foodphysics and object)
        for cls in reversed(self.__class__.__mro__):
            if cls in (foodphysics, object):
                continue
            # For each attribute defined at the class level,
            # if it is not 'description', not callable, and not a dunder, set it as an instance attribute.
            for key, value in cls.__dict__.items(): # we loop on class attributes
                if key in ("description","level") or key in self._lowLevelPredictionPropertyList or key.startswith("__") or key.startswith("_") or callable(value):
                    continue
                if key not in kwargs:
                    setattr(self, key, numvalidator(key,value))
        # Now update/override with any keyword arguments provided at instantiation.
        for key, value in kwargs.items():
            value = numvalidator(key,value)
            if key not in paramaterNamesWithUnits: # we protect the values of units (they are SI, they cannot be changed)
                setattr(self, key, value)
        # we initialize the acknowledgment process for future property propagation
        self._hasbeeninherited = {}
        # we initialize _kmodel if _compute_kmodel exists
        if hasattr(self,"_compute_kmodel"):
            self._kmodel = "default" # do not initialize at self._compute_kmodel (default forces refresh)
        # we initialize the _simstate storing the last simulation result available
        self._simstate = None # simulation results
        self._inpstate = None # their inputs
        # For cooperative multiple inheritance, call the next __init__ if it exists.
        super().__init__()
        # Define actual class references to avoid circular dependency issues
        if self.__class__._transferable_properties["contacttemperature"]["foodphysics"]["prototype"] is None:
            self.__class__._transferable_properties["contacttemperature"]["foodphysics"]["prototype"] = foodphysics
            self.__class__._transferable_properties["contacttemperature"]["layer"]["prototype"] = layer
            self.__class__._transferable_properties["contacttime"]["foodphysics"]["prototype"] = foodphysics
            self.__class__._transferable_properties["contacttime"]["SensPatankarResult"]["prototype"] = SensPatankarResult
            self.__class__._transferable_properties["surfacearea"]["foodphysics"]["prototype"] = foodphysics
            self.__class__._transferable_properties["volume"]["foodphysics"]["prototype"] = foodphysics
            self.__class__._transferable_properties["substance"]["foodlayer"]["prototype"] = migrant
            self.__class__._transferable_properties["substance"]["layer"]["prototype"] = layer
            self.__class__._transferable_properties["medium"]["layer"]["prototype"] = layer

    # ------- [properties to access/modify simstate] --------
    @property
    def lastinput(self):
        """Getter for last layer input."""
        return self._inpstate

    @lastinput.setter
    def lastinput(self, value):
        """Setter for last layer input."""
        self._inpstate = value

    @property
    def lastsimulation(self):
        """Getter for last simulation results."""
        return self._simstate

    @lastsimulation.setter
    def lastsimulation(self, value):
        """Setter for last simulation results."""
        self._simstate = value

    @property
    def hassimulation(self):
        """Returns True if a simulation exists"""
        return self.lastsimulation is not None


    # ------- [inheritance registration mechanism] --------
    def acknowledge(self, what=None, category=None):
        """
        Register inherited properties under a given category.

        Parameters:
        -----------
        what : str or list of str or a set
            The properties or attributes that have been inherited.
        category : str
            The category under which the properties are grouped.

        Example:
        --------
        >>> b = B()
        >>> b.acknowledge(what="volume", category="geometry")
        >>> b.acknowledge(what=["surfacearea", "diameter"], category="geometry")
        >>> print(b._hasbeeninherited)
        {'geometry': {'volume', 'surfacearea', 'diameter'}}
        """
        if category is None or what is None:
            raise ValueError("Both 'what' and 'category' must be provided.")
        if isinstance(what, str):
            what = {what}  # Convert string to a set
        elif isinstance(what, list):
            what = set(what)  # Convert list to a set for uniqueness
        elif not isinstance(what,set):
            raise TypeError("'what' must be a string, a list, or a set of strings.")
        if category not in self._hasbeeninherited:
            self._hasbeeninherited[category] = set()
        self._hasbeeninherited[category].update(what)


    def refresh(self):
        """refresh all physcal paramaters after instantiation"""
        for key, value in self.__dict__.items():    # we loop on instance attributes
            if key in parametersWithUnits:          # the parameter is a physical quantity
                if isinstance(value,tuple):         # the supplied value as unit
                    value = check_units(value)[0]   # we convert to SI, we drop the units
                    setattr(self,key,value)
                if not isinstance(value,np.ndarray):
                    value = np.array([value])      # we force NumPy class
                    setattr(self,key,value)

    def update(self, **kwargs):
        """
        Update modifiable parameters of the foodphysics object.

        Modifiable Parameters:
            - name (str): New name for the object.
            - description (str): New description.
            - volume (float or tuple): Volume (can be tuple like (1, "L")).
            - surfacearea (float or tuple): Surface area (can be tuple like (1, "cm^2")).
            - density (float or tuple): Density (can be tuple like (1000, "kg/m^3")).
            - CF0 (float or tuple): Initial concentration in the food.
            - contacttime (float or tuple): Contact time (can be tuple like (1, "h")).
            - contacttemperature (float or tuple): Temperature (can be tuple like (25, "degC")).
            - h (float or tuple): Mass transfer coefficient (can be tuple like (1e-6,"m/s")).
            - k (float or tuple): Henry-like coefficient for the food (can be tuple like (1,"a.u.")).

        """
        if not kwargs:  # shortcut
            return self # for chaining
        def checkunits(value):
            """Helper function to convert physical quantities to SI."""
            if isinstance(value, tuple) and len(value) == 2:
                scale = check_units(value)[0]  # Convert to SI, drop unit
                return np.array([scale], dtype=float)  # Ensure NumPy array
            elif isinstance(value, (int, float, np.ndarray)):
                return np.array([value], dtype=float)  # Ensure NumPy array
            else:
                raise ValueError(f"Invalid value for physical quantity: {value}")
        # Update `name` and `description` if provided
        if "name" in kwargs:
            self.name = str(kwargs["name"])
        if "description" in kwargs:
            self.description = str(kwargs["description"])
        # Update physical properties
        for key in parametersWithUnits.keys():
            if key in kwargs:
                value = kwargs[key]
                setattr(self, key, checkunits(value))  # Ensure NumPy array in SI
        # Update medium, migrant (they accept aliases)
        lex = {
            "substance": ("substance", "migrant", "chemical", "solute"),
            "medium": ("medium", "simulant", "food", "contact"),
        }
        used_aliases = {}
        def get_value(canonical_key):
            """Find the correct alias in kwargs and return its value, or None if not found."""
            found_key = None
            for alias in lex.get(canonical_key, ()):  # Get aliases, default to empty tuple
                if alias in kwargs:
                    if alias in used_aliases:
                        raise ValueError(f"Alias '{alias}' is used for multiple canonical keys!")
                    found_key = alias
                    used_aliases[alias] = canonical_key
                    break  # Stop at the first match
            return kwargs.get(found_key, None)  # Return value if found, else None
        # Assign values only if found in kwargs
        new_substance = get_value("substance")
        new_medium = get_value("medium")
        if new_substance is not None: self.substance = new_substance
        if new_medium is not None:self.medium = new_medium
        # return
        return self  # Return self for method chaining if needed

    def get_param(self, key, default=None, acceptNone=True):
        """Retrieve instance attribute with a default fallback if enabled."""
        paramdefaultvalue = 1
        if isinstance(self,(setoff,nofood)):
            if key in parametersWithUnits_andfallback:
                value =  self.__dict__.get(key, paramdefaultvalue) if default is None else self.__dict__.get(key, default)
                if isinstance(value,np.ndarray):
                    value = value.item()
                if value is None and not acceptNone:
                    value = paramdefaultvalue if default is None else default
                return np.array([value])
            if key in paramaterNamesWithUnits:
                return self.__dict__.get(key, parametersWithUnits[key]) if default is None else self.__dict__.get(key, default)
        if key in parametersWithUnits:
            if hasattr(self, key):
                return getattr(self,key)
            else:
                raise KeyError(
                    f"Missing property: '{key}' in instance of class '{self.__class__.__name__}'.\n"
                    f"To define it, use one of the following methods:\n"
                    f"  - Direct assignment:   object.{key} = value\n"
                    f"  - Using update method: object.update({key}=value)\n"
                    f"Note: The value can also be provided as a tuple (value, 'unit')."
                )
        elif key in paramaterNamesWithUnits:
            return self.__dict__.get(key, paramaterNamesWithUnits[key]) if default is None else self.__dict__.get(key, default)
        raise KeyError(f'Use getattr("{key}") to retrieve the value of {key}')

    def __repr__(self):
        """Formatted string representation of the FOODlayer object."""
        # Refresh all definitions
        self.refresh()
        # Header with name and description
        repr_str = f'Food object "{self.name}" ({self.description}) with properties:\n'
        # Helper function to extract a numerical value safely
        def format_value(value):
            """Ensure the value is a float or a single-item NumPy array."""
            if isinstance(value, np.ndarray):
                return value.item() if value.size == 1 else value[0]  # Ensure scalar representation
            elif value is None:
                return value
            return float(value)
        # Collect defined properties and their formatted values
        properties = []
        excluded = ("k") if self.haskmodel else ("k0")
        for key, unit in parametersWithUnits.items():
            if hasattr(self, key) and key not in excluded:  # Include only defined parameters
                value = format_value(getattr(self, key))
                unit_str = self.get_param(f"{key}Units", unit)  # Retrieve unit safely
                if value is not None:
                    properties.append((key, f"{value:0.8g}", unit_str))

        # Sort properties alphabetically
        properties.sort(key=lambda x: x[0])

        # Determine max width for right-aligned names
        max_key_length = max(len(key) for key, _, _ in properties) if properties else 0
        # Construct formatted property list
        for key, value, unit_str in properties:
            repr_str += f"{key.rjust(max_key_length)}: {value} [{unit_str}]\n"
            if key == "k0":
                extra_info = f"{self._substance.k.__name__}(<{self.chemicalsubstance}>,{self._substance})"
                repr_str += f"{' ' * (max_key_length)}= {extra_info}\n"
        print(repr_str.strip())  # Print formatted output
        return str(self)  # Simplified representation for repr()



    def __str__(self):
        """Formatted string representation of the property"""
        simstr = ' [simulated]' if self.hassimulation else ""
        return f"<{self.__class__.__name__}: {self.name}>{simstr}"

    def copy(self,**kwargs):
        """Creates a deep copy of the current food instance."""
        return duplicate(self).update(**kwargs)


    @property
    def PBC(self):
        """
        Returns True if h is not defined or is None.
        This property is used to identify periodic boundary conditions, also called setoff mass transfer.

        """
        if not hasattr(self,"h"):
            return False # None
        htmp = getattr(self,"h")
        if isinstance(htmp,np.ndarray):
            htmp = htmp.item()
        return htmp is None

    @property
    def hassubstance(self):
        """Returns True if substance is defined (class migrant)"""
        if not hasattr(self, "_substance"):
            return False
        return isinstance(self._substance,migrant)



    # --------------------------------------------------------------------
    # For convenience, several operators have been overloaded
    #   medium >> packaging      # sets the volume and the surfacearea
    #   medium >> material       # propagates the contact temperature from the medium to the material
    #   sol = medium << material # simulate migration from the material to the medium
    # --------------------------------------------------------------------

    # method: medium._to(material) and its associated operator >>
    def _to(self, other = None):
        """
        Transfers inherited properties to another object based on predefined rules.

        Parameters:
        -----------
        other : object
            The recipient object that will receive the transferred properties.

        Notes:
        ------
        - Only properties listed in `_transferable_properties` are transferred.
        - A property can only be transferred if `other` matches the expected class.
        - The property may have a different name in `other` as defined in `as`.
        - If `onlyifinherited` is True, the property must have been inherited by `self`.
        - If `checkNumPy` is True, ensures NumPy array compatibility.
        - Updates `other`'s `_hasbeeninherited` tracking.
        """
        for prop, classes in self._transferable_properties.items():
            if prop not in self._list_categories:
                continue  # Skip properties not categorized

            category = self._list_categories[prop]

            for class_name, rules in classes.items():

                if not isinstance(other, rules["prototype"]):
                    continue  # Skip if other is not an instance of the expected prototype class

                if rules["onlyifinherited"] and category not in self._hasbeeninherited:
                    continue  # Skip if property must be inherited but is not

                if rules["onlyifinherited"] and prop not in self._hasbeeninherited[category]:
                    continue  # Skip if the specific property has not been inherited

                if not hasattr(self, prop):
                    continue  # Skip if the property does not exist on self

                # Determine the target attribute name in other
                target_attr = rules["as"] if rules["as"] else prop

                # Retrieve the property value
                value = getattr(self, prop)

                # Handle NumPy array check
                if rules["checkNumPy"] and hasattr(other, target_attr):
                    existing_value = getattr(other, target_attr)
                    if isinstance(existing_value, np.ndarray):
                        value = np.full(existing_value.shape, value)

                # Assign the value to other
                setattr(other, target_attr, value)

                # Register the transfer in other’s inheritance tracking
                other.acknowledge(what=target_attr, category=category)

        # to chain >>
        return other

    def __rshift__(self, other):
        """Overloads >> to propagate to other."""
        # inherit substance/migrant from other if self.migrant is None
        if isinstance(other,(layer,foodlayer)):
            if isinstance(self,foodlayer):
                if self.substance is None and other.substance is not None:
                    self.substance = other.substance
        return self._to(other) # propagates

    def __matmul__(self, other):
        """Overload @: equivalent to >> if other is a layer."""
        if not isinstance(other, layer):
            raise TypeError(f"Right operand must be a layer not a {type(other).__name__}")
        return self._to(other)

    # migration method
    def migration(self,material,**kwargs):
        """interface to simulation engine: senspantankar"""
        from patankar.migration import senspatankar
        self._to(material) # propagate contact conditions first
        sim = senspatankar(material,self,**kwargs)
        self.lastsimulation = sim # store the last simulation result in medium
        self.lastinput = material # store the last input (material)
        sim.savestate(material,self) # store the inputs in sim for chaining
        return sim

    def contact(self,material,**kwargs):
        """alias to migration method"""
        return self.migration(material,**kwargs)

    @property
    def haskmodel(self):
        """Returns True if a kmodel has been defined"""
        if hasattr(self, "_compute_kmodel"):
            if self._compute_kmodel() is not None:
                return True
            elif callable(self.kmodel):
                return self.kmodel() is not None
        return False

Subclasses

  • patankar.food.chemicalaffinity
  • patankar.food.foodlayer
  • patankar.food.nofood
  • patankar.food.realcontact
  • patankar.food.setoff
  • patankar.food.testcontact
  • patankar.food.texture

Class variables

var description
var level
var name

Instance variables

var PBC

Returns True if h is not defined or is None. This property is used to identify periodic boundary conditions, also called setoff mass transfer.

Expand source code
@property
def PBC(self):
    """
    Returns True if h is not defined or is None.
    This property is used to identify periodic boundary conditions, also called setoff mass transfer.

    """
    if not hasattr(self,"h"):
        return False # None
    htmp = getattr(self,"h")
    if isinstance(htmp,np.ndarray):
        htmp = htmp.item()
    return htmp is None
var haskmodel

Returns True if a kmodel has been defined

Expand source code
@property
def haskmodel(self):
    """Returns True if a kmodel has been defined"""
    if hasattr(self, "_compute_kmodel"):
        if self._compute_kmodel() is not None:
            return True
        elif callable(self.kmodel):
            return self.kmodel() is not None
    return False
var hassimulation

Returns True if a simulation exists

Expand source code
@property
def hassimulation(self):
    """Returns True if a simulation exists"""
    return self.lastsimulation is not None
var hassubstance

Returns True if substance is defined (class migrant)

Expand source code
@property
def hassubstance(self):
    """Returns True if substance is defined (class migrant)"""
    if not hasattr(self, "_substance"):
        return False
    return isinstance(self._substance,migrant)
var lastinput

Getter for last layer input.

Expand source code
@property
def lastinput(self):
    """Getter for last layer input."""
    return self._inpstate
var lastsimulation

Getter for last simulation results.

Expand source code
@property
def lastsimulation(self):
    """Getter for last simulation results."""
    return self._simstate

Methods

def acknowledge(self, what=None, category=None)

Register inherited properties under a given category.

Parameters:

what : str or list of str or a set
    The properties or attributes that have been inherited.
category : str
    The category under which the properties are grouped.

Example:

>>> b = B()
>>> b.acknowledge(what="volume", category="geometry")
>>> b.acknowledge(what=["surfacearea", "diameter"], category="geometry")
>>> print(b._hasbeeninherited)
{'geometry': {'volume', 'surfacearea', 'diameter'}}
Expand source code
def acknowledge(self, what=None, category=None):
    """
    Register inherited properties under a given category.

    Parameters:
    -----------
    what : str or list of str or a set
        The properties or attributes that have been inherited.
    category : str
        The category under which the properties are grouped.

    Example:
    --------
    >>> b = B()
    >>> b.acknowledge(what="volume", category="geometry")
    >>> b.acknowledge(what=["surfacearea", "diameter"], category="geometry")
    >>> print(b._hasbeeninherited)
    {'geometry': {'volume', 'surfacearea', 'diameter'}}
    """
    if category is None or what is None:
        raise ValueError("Both 'what' and 'category' must be provided.")
    if isinstance(what, str):
        what = {what}  # Convert string to a set
    elif isinstance(what, list):
        what = set(what)  # Convert list to a set for uniqueness
    elif not isinstance(what,set):
        raise TypeError("'what' must be a string, a list, or a set of strings.")
    if category not in self._hasbeeninherited:
        self._hasbeeninherited[category] = set()
    self._hasbeeninherited[category].update(what)
def contact(self, material, **kwargs)

alias to migration method

Expand source code
def contact(self,material,**kwargs):
    """alias to migration method"""
    return self.migration(material,**kwargs)
def copy(self, **kwargs)

Creates a deep copy of the current food instance.
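
A brief sketch: the keyword arguments are forwarded to update(), so the copy can be modified in the same call (values are illustrative):

from patankar.food import foodlayer

medium = foodlayer(name="simulant")
warmer = medium.copy(name="warmer simulant", contacttemperature=(60, "degC"))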

Expand source code
def copy(self,**kwargs):
    """Creates a deep copy of the current food instance."""
    return duplicate(self).update(**kwargs)
def get_param(self, key, default=None, acceptNone=True)

Retrieve instance attribute with a default fallback if enabled.
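
A minimal sketch, assuming the requested key is one of the documented physical parameters of the instance:

from patankar.food import foodlayer

medium = foodlayer(name="simulant")
area = medium.get_param("surfacearea")   # stored value in SI units, returned as a NumPy array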

Expand source code
def get_param(self, key, default=None, acceptNone=True):
    """Retrieve instance attribute with a default fallback if enabled."""
    paramdefaultvalue = 1
    if isinstance(self,(setoff,nofood)):
        if key in parametersWithUnits_andfallback:
            value =  self.__dict__.get(key, paramdefaultvalue) if default is None else self.__dict__.get(key, default)
            if isinstance(value,np.ndarray):
                value = value.item()
            if value is None and not acceptNone:
                value = paramdefaultvalue if default is None else default
            return np.array([value])
        if key in paramaterNamesWithUnits:
            return self.__dict__.get(key, parametersWithUnits[key]) if default is None else self.__dict__.get(key, default)
    if key in parametersWithUnits:
        if hasattr(self, key):
            return getattr(self,key)
        else:
            raise KeyError(
                f"Missing property: '{key}' in instance of class '{self.__class__.__name__}'.\n"
                f"To define it, use one of the following methods:\n"
                f"  - Direct assignment:   object.{key} = value\n"
                f"  - Using update method: object.update({key}=value)\n"
                f"Note: The value can also be provided as a tuple (value, 'unit')."
            )
    elif key in paramaterNamesWithUnits:
        return self.__dict__.get(key, paramaterNamesWithUnits[key]) if default is None else self.__dict__.get(key, default)
    raise KeyError(f'Use getattr("{key}") to retrieve the value of {key}')
def migration(self, material, **kwargs)

interface to simulation engine: senspatankar
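
A minimal sketch of the call pattern (layer thickness and diffusivity are placeholder values):

from patankar.food import foodlayer
from patankar.layer import LDPE

medium = foodlayer(name="simulant", contacttemperature=(40, "degC"))
packaging = LDPE(l=50e-6, D=1e-14)   # placeholder values
sol = medium.migration(packaging)    # propagates contact conditions, then runs senspatankar
sol.plotCF()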

Expand source code
def migration(self,material,**kwargs):
    """interface to simulation engine: senspantankar"""
    from patankar.migration import senspatankar
    self._to(material) # propagate contact conditions first
    sim = senspatankar(material,self,**kwargs)
    self.lastsimulation = sim # store the last simulation result in medium
    self.lastinput = material # store the last input (material)
    sim.savestate(material,self) # store the inputs in sim for chaining
    return sim
def refresh(self)

refresh all physical parameters after instantiation

Expand source code
def refresh(self):
    """refresh all physcal paramaters after instantiation"""
    for key, value in self.__dict__.items():    # we loop on instance attributes
        if key in parametersWithUnits:          # the parameter is a physical quantity
            if isinstance(value,tuple):         # the supplied value as unit
                value = check_units(value)[0]   # we convert to SI, we drop the units
                setattr(self,key,value)
            if not isinstance(value,np.ndarray):
                value = np.array([value])      # we force NumPy class
                setattr(self,key,value)
def update(self, **kwargs)

Update modifiable parameters of the foodphysics object.

Modifiable Parameters:

  • name (str): New name for the object.
  • description (str): New description.
  • volume (float or tuple): Volume (can be tuple like (1, "L")).
  • surfacearea (float or tuple): Surface area (can be tuple like (1, "cm^2")).
  • density (float or tuple): Density (can be tuple like (1000, "kg/m^3")).
  • CF0 (float or tuple): Initial concentration in the food.
  • contacttime (float or tuple): Contact time (can be tuple like (1, "h")).
  • contacttemperature (float or tuple): Temperature (can be tuple like (25, "degC")).
  • h (float or tuple): Mass transfer coefficient (can be tuple like (1e-6, "m/s")).
  • k (float or tuple): Henry-like coefficient for the food (can be tuple like (1, "a.u.")).

Expand source code
def update(self, **kwargs):
    """
    Update modifiable parameters of the foodphysics object.

    Modifiable Parameters:
        - name (str): New name for the object.
        - description (str): New description.
        - volume (float or tuple): Volume (can be tuple like (1, "L")).
        - surfacearea (float or tuple): Surface area (can be tuple like (1, "cm^2")).
        - density (float or tuple): Density (can be tuple like (1000, "kg/m^3")).
        - CF0 (float or tuple): Initial concentration in the food.
        - contacttime (float or tuple): Contact time (can be tuple like (1, "h")).
        - contacttemperature (float or tuple): Temperature (can be tuple like (25, "degC")).
        - h (float or tuple): Mass transfer coefficient (can be tuple like (1e-6,"m/s")).
        - k (float or tuple): Henry-like coefficient for the food (can be tuple like (1,"a.u.")).

    """
    if not kwargs:  # shortcut
        return self # for chaining
    def checkunits(value):
        """Helper function to convert physical quantities to SI."""
        if isinstance(value, tuple) and len(value) == 2:
            scale = check_units(value)[0]  # Convert to SI, drop unit
            return np.array([scale], dtype=float)  # Ensure NumPy array
        elif isinstance(value, (int, float, np.ndarray)):
            return np.array([value], dtype=float)  # Ensure NumPy array
        else:
            raise ValueError(f"Invalid value for physical quantity: {value}")
    # Update `name` and `description` if provided
    if "name" in kwargs:
        self.name = str(kwargs["name"])
    if "description" in kwargs:
        self.description = str(kwargs["description"])
    # Update physical properties
    for key in parametersWithUnits.keys():
        if key in kwargs:
            value = kwargs[key]
            setattr(self, key, checkunits(value))  # Ensure NumPy array in SI
    # Update medium, migrant (they accept aliases)
    lex = {
        "substance": ("substance", "migrant", "chemical", "solute"),
        "medium": ("medium", "simulant", "food", "contact"),
    }
    used_aliases = {}
    def get_value(canonical_key):
        """Find the correct alias in kwargs and return its value, or None if not found."""
        found_key = None
        for alias in lex.get(canonical_key, ()):  # Get aliases, default to empty tuple
            if alias in kwargs:
                if alias in used_aliases:
                    raise ValueError(f"Alias '{alias}' is used for multiple canonical keys!")
                found_key = alias
                used_aliases[alias] = canonical_key
                break  # Stop at the first match
        return kwargs.get(found_key, None)  # Return value if found, else None
    # Assign values only if found in kwargs
    new_substance = get_value("substance")
    new_medium = get_value("medium")
    if new_substance is not None: self.substance = new_substance
    if new_medium is not None:self.medium = new_medium
    # return
    return self  # Return self for method chaining if needed
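
A short usage sketch of update(); quantities may be given as (value, "unit") tuples, which are converted to SI (values are illustrative):

from patankar.food import foodlayer

medium = foodlayer(name="simulant")
medium.update(volume=(1, "L"),
              surfacearea=(6, "dm**2"),
              contacttemperature=(25, "degC"))   # update() returns self, so calls can be chained
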
class layer (l=None, D=None, k=None, C0=None, rho=None, T=None, lunit=None, Dunit=None, kunit=None, Cunit=None, rhounit=None, Tunit=None, layername=None, layertype=None, layermaterial=None, layercode=None, substance=None, medium=None, nmesh=None, nmeshmin=None, Dlink=None, klink=None, C0link=None, Tlink=None, llink=None, verbose=None, verbosity=2, **unresolved)

Core Functionality

This class models layers in food packaging, handling mass transfer, partitioning, and meshing for finite-volume simulations using a modified Patankar method. Layers can be assembled into multilayers via the + operator and support dynamic property linkage using layerLink.


Key Properties

  • l: Thickness of the layer (m)
  • D: Diffusion coefficient (m²/s)
  • k: Partition coefficient (dimensionless)
  • C0: Initial concentration (arbitrary units)
  • rho: Density (kg/m³)
  • T: Contact temperature (°C)
  • substance: Migrant/substance modeled for diffusion
  • medium: The food medium in contact with the layer
  • Dmodel, kmodel: Callable models for diffusion and partitioning

Methods

  • __add__(self, other): Combines two layers into a multilayer structure.
  • __mul__(self, n): Duplicates a layer n times to create a multilayer.
  • __getitem__(self, i): Retrieves a sublayer from a multilayer.
  • __setitem__(self, i, other): Replaces sublayers in a multilayer structure.
  • mesh(self): Generates a numerical mesh for finite-volume simulations.
  • struct(self): Returns a dictionary representation of the layer properties.
  • resolvename(param_value, param_key, **unresolved): Resolves synonyms for parameter names.
  • help(cls): Displays a dynamically formatted summary of input parameters.

Integration with SFPPy Modules

  • Works with migration.py for mass transfer simulations.
  • Interfaces with food.py to define food-contact conditions.
  • Uses property.py for predicting diffusion (D) and partitioning (k).
  • Connects with geometry.py for 3D packaging simulations.

Usage Example

from patankar.layer import LDPE, PP, layerLink

# Define a polymer layer with default properties
A = LDPE(l=50e-6, D=1e-14)

# Create a multilayer structure
B = PP(l=200e-6, D=1e-15)
multilayer = A + B

# Assign dynamic property linkage
k_link = layerLink("k", indices=[1], values=[10])  # Assign partition coefficient to the second layer
multilayer.klink = k_link

# Simulate migration
from patankar.migration import senspatankar
from patankar.food import ethanol
medium = ethanol()
solution = senspatankar(multilayer, medium)
solution.plotCF()

Notes

  • This class supports dynamic property inheritance, meaning D and k can be computed based on the substance defined in substance and medium.
  • The layerLink mechanism allows parameter adjustments without modifying the core object.
  • The modified finite-volume meshing ensures accurate steady-state and transient behavior.

Parameters

layername : string, optional
Layer name. The default is "my layer".
layertype : string, optional
Layer type. The default is "unknown type".
layermaterial : string, optional
Material identification. The default is "unknown material".
PHYSICAL QUANTITIES
l : scalar or tuple (value, "unit"), optional
Thickness. The default is 50e-6 (m).
D : scalar or tuple (value, "unit"), optional
Diffusivity. The default is 1e-14 (m^2/s).
k : scalar or tuple (value, "unit"), optional
Henry-like coefficient. The default is 1 (a.u.).
C0 : scalar or tuple (value, "unit"), optional
Initial concentration. The default is 1000 (a.u.).
PHYSICAL UNITS
lunit : string, optional
Length unit. The default is "m".
Dunit : string, optional
Diffusivity unit. The default is "m^2/s".
kunit : string, optional
Henry-like coefficient unit. The default is "a.u.".
Cunit : string, optional
Initial concentration unit. The default is "a.u.".

Returns

a monolayer object which can be assembled into a multilayer structure
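As a complement to the usage example above, the sketch below illustrates layer arithmetic, indexing and meshing with the generic layer class (hypothetical layer names and illustrative property values; units follow the (value, "unit") tuple convention documented above):

from patankar.layer import layer

A = layer(layername="contact layer", l=(50, "um"), D=(1e-14, "m**2/s"), C0=500)
B = layer(layername="outer layer", l=(100, "um"), D=(1e-13, "m**2/s"), C0=0)

multilayer = A + B        # assembled left to right (the food contacts the first layer)
stack = multilayer * 2    # duplication: A + B + A + B
sub = stack[0]            # retrieve a sublayer
stack[1] = B              # replace a sublayer
FVmesh = stack.mesh()     # finite-volume mesh, one entry per sublayer
print(stack.thickness, stack.rank)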
 
Expand source code
class layer:
    """
    ------------------------------------------------------------------------------
    **Core Functionality**
    ------------------------------------------------------------------------------
    This class models layers in food packaging, handling mass transfer, partitioning,
    and meshing for finite-volume simulations using a modified Patankar method.
    Layers can be assembled into multilayers via the `+` operator and support
    dynamic property linkage using `layerLink`.

    ------------------------------------------------------------------------------
    **Key Properties**
    ------------------------------------------------------------------------------
    - `l`: Thickness of the layer (m)
    - `D`: Diffusion coefficient (m²/s)
    - `k`: Partition coefficient (dimensionless)
    - `C0`: Initial concentration (arbitrary units)
    - `rho`: Density (kg/m³)
    - `T`: Contact temperature (°C)
    - `substance`: Migrant/substance modeled for diffusion
    - `medium`: The food medium in contact with the layer
    - `Dmodel`, `kmodel`: Callable models for diffusion and partitioning

    ------------------------------------------------------------------------------
    **Methods**
    ------------------------------------------------------------------------------
    - `__add__(self, other)`: Combines two layers into a multilayer structure.
    - `__mul__(self, n)`: Duplicates a layer `n` times to create a multilayer.
    - `__getitem__(self, i)`: Retrieves a sublayer from a multilayer.
    - `__setitem__(self, i, other)`: Replaces sublayers in a multilayer structure.
    - `mesh(self)`: Generates a numerical mesh for finite-volume simulations.
    - `struct(self)`: Returns a dictionary representation of the layer properties.
    - `resolvename(param_value, param_key, **unresolved)`: Resolves synonyms for parameter names.
    - `help(cls)`: Displays a dynamically formatted summary of input parameters.

    ------------------------------------------------------------------------------
    **Integration with SFPPy Modules**
    ------------------------------------------------------------------------------
    - Works with `migration.py` for mass transfer simulations.
    - Interfaces with `food.py` to define food-contact conditions.
    - Uses `property.py` for predicting diffusion (`D`) and partitioning (`k`).
    - Connects with `geometry.py` for 3D packaging simulations.

    ------------------------------------------------------------------------------
    **Usage Example**
    ------------------------------------------------------------------------------
    ```python
    from patankar.layer import LDPE, PP, layerLink

    # Define a polymer layer with default properties
    A = LDPE(l=50e-6, D=1e-14)

    # Create a multilayer structure
    B = PP(l=200e-6, D=1e-15)
    multilayer = A + B

    # Assign dynamic property linkage
    k_link = layerLink("k", indices=[1], values=[10])  # Assign partition coefficient to the second layer
    multilayer.klink = k_link

    # Simulate migration
    from patankar.migration import senspatankar
    from patankar.food import ethanol
    medium = ethanol()
    solution = senspatankar(multilayer, medium)
    solution.plotCF()
    ```

    ------------------------------------------------------------------------------
    **Notes**
    ------------------------------------------------------------------------------
    - This class supports dynamic property inheritance, meaning `D` and `k` can be computed
      based on the substance defined in `substance` and `medium`.
    - The `layerLink` mechanism allows parameter adjustments without modifying the core object.
    - The modified finite-volume meshing ensures **accurate steady-state and transient** behavior.

    """

    # -----------------------------------------------------------------------------
    # Class attributes that can be overridden in instances.
    # Their default values are set at the class level and overridden by similar
    # instance properties with @property.setter.
    # These values cannot be set during construction, only after instantiation.
    # -----------------------------------------------------------------------------
    # These properties are essential for model predictions; they cannot be customized
    # beyond the rules accepted by the model predictors (they are not metadata).
    _physicalstate = "solid"        # solid (default), liquid, gas, porous
    _chemicalclass = "polymer"      # polymer (default), other
    _chemicalsubstance = None       # None (default), monomer for polymers
    _polarityindex = 0.0            # polarity index (roughly: 0=hexane, 10=water)

    # Low-level prediction properties (these properties are common with patankar.food)
    _lowLevelPredictionPropertyList = ["physicalstate","chemicalclass",
                                       "chemicalsubstance","polarityindex","ispolymer","issolid"]

    # --------------------------------------------------------------------
    # PRIVATE PROPERTIES (cannot be changed by the user)
    # __ read only attributes
    #  _ private attributes (not public)
    # --------------------------------------------------------------------
    __description = "LAYER object"                # description
    __version = 1.0                               # version
    __contact = "olivier.vitrac@agroparistech.fr" # contact person
    _printformat = "%0.4g"   # format to display D, k, l values


    # Synonyms dictionary: Maps alternative names to the actual parameter
    # these synonyms can be used during construction
    _synonyms = {
        "substance": {"migrant", "compound", "chemical","molecule","solute"},
        "medium": {"food","simulant","fluid","liquid","contactmedium"},
        "C0": {"CP0", "Cp0"},
        "l": {"lp", "lP"},
        "D": {"Dp", "DP"},
        "k": {"kp", "kP"},
        "T": {"temp","Temp","temperature","Temperature",
              "contacttemperature","ContactTemperature","contactTemperature"}
    }
    # Default values for parameters (note that Td cannot be changed by the end-user)
    _defaults = {
        "l": 5e-5,   # Thickness (m)
        "D": 1e-14,  # Diffusion coefficient (m^2/s)
        "k": 1.0,      # Henri-like coefficient (dimensionless)
        "C0": 1000,  # Initial concentration (arbitrary units)
        "rho": 1000, # Default density (kg/m³)
        "T": 40.0,     # Default temperature (°C)
        "Td": 25.0,    # Reference temperature for densities (°C)
        # Units (do not change)
        "lunit": "m",
        "Dunit": "m**2/s",
        "kunit": "a.u.",  # NoUnits
        "Cunit": "a.u.",  # NoUnits
        "rhounit": "kg/m**3",
        "Tunit": "degC",  # Temperatures are indicated in °C instead of K (to reduce end-user mistakes)
        # Layer properties
        "layername": "my layer",
        "layertype": "unknown type",
        "layermaterial": "unknown material",
        "layercode": "N/A",
        # Mesh parameters
        "nmeshmin": 20,
        "nmesh": 600,
        # Substance
        "substance": None,
        "simulant": None,
        # Other parameters
        "verbose": None,
        "verbosity": 2
    }

    # List units
    _parametersWithUnits = {
        "l": "m",
        "D": "m**2/s",
        "k": "a.u.",
        "C": "a.u.",
        "rhp": "kg/m**3",
        "T": "degC",
        }

    # Brief descriptions for each parameter
    _descriptionInputs = {
        "l": "Thickness of the layer (m)",
        "D": "Diffusion coefficient (m²/s)",
        "k": "Henri-like coefficient (dimensionless)",
        "C0": "Initial concentration (arbitrary units)",
        "rho": "Density of the material (kg/m³)",
        "T": "Layer temperature (°C)",
        "Td": "Reference temperature for densities (°C)",
        "lunit": "Unit of thickness (default: m)",
        "Dunit": "Unit of diffusion coefficient (default: m²/s)",
        "kunit": "Unit of Henri-like coefficient (default: a.u.)",
        "Cunit": "Unit of initial concentration (default: a.u.)",
        "rhounit": "Unit of density (default: kg/m³)",
        "Tunit": "Unit of temperature (default: degC)",
        "layername": "Name of the layer",
        "layertype": "Type of layer (e.g., polymer, ink, air)",
        "layermaterial": "Material composition of the layer",
        "layercode": "Identification code for the layer",
        "nmeshmin": "Minimum number of FV mesh elements for the layer",
        "nmesh": "Number of FV mesh elements for numerical computation",
        "verbose": "Verbose mode (None or boolean)",
        "verbosity": "Level of verbosity for debug messages (integer)"
    }

    # --------------------------------------------------------------------
    # CONSTRUCTOR OF INSTANCE PROPERTIES
    # None = missing numeric value (managed by default)
    # --------------------------------------------------------------------
    def __init__(self,
                 l=None, D=None, k=None, C0=None, rho=None, T=None,
                 lunit=None, Dunit=None, kunit=None, Cunit=None, rhounit=None, Tunit=None,
                 layername=None,layertype=None,layermaterial=None,layercode=None,
                 substance = None, medium = None,
                 # Dmodel = None, kmodel = None, they are defined via migrant (future overrides)
                 nmesh=None, nmeshmin=None, # simulation parameters
                 # link properties (for fitting and linking properties across simulations)
                 Dlink=None, klink=None, C0link=None, Tlink=None, llink=None,
                 verbose=None, verbosity=2,**unresolved):
        """

        Parameters
        ----------

        layername : string, optional
            Layer name. The default is "my layer".
        layertype : string, optional
            Layer type. The default is "unknown type".
        layermaterial : string, optional
            Material identification. The default is "unknown material".
        PHYSICAL QUANTITIES
        l : scalar or tuple (value, "unit"), optional
            Thickness. The default is 50e-6 (m).
        D : scalar or tuple (value, "unit"), optional
            Diffusivity. The default is 1e-14 (m^2/s).
        k : scalar or tuple (value, "unit"), optional
            Henry-like coefficient. The default is 1 (a.u.).
        C0 : scalar or tuple (value, "unit"), optional
            Initial concentration. The default is 1000 (a.u.).
        PHYSICAL UNITS
        lunit : string, optional
            Length unit. The default is "m".
        Dunit : string, optional
            Diffusivity unit. The default is "m^2/s".
        kunit : string, optional
            Henry-like coefficient unit. The default is "a.u.".
        Cunit : string, optional
            Initial concentration unit. The default is "a.u.".
        Returns
        -------
        a monolayer object which can be assembled into a multilayer structure

        """
        # resolve alternative names used by end-users
        substance = layer.resolvename(substance,"substance",**unresolved)
        medium = layer.resolvename(medium, "medium", **unresolved)
        C0 = layer.resolvename(C0,"C0",**unresolved)
        l = layer.resolvename(l,"l",**unresolved)
        D = layer.resolvename(D,"D",**unresolved)
        k = layer.resolvename(k,"k",**unresolved)
        T = layer.resolvename(T,"T",**unresolved)

        # Assign defaults only if values are not provided
        l = l if l is not None else layer._defaults["l"]
        D = D if D is not None else layer._defaults["D"]
        k = k if k is not None else layer._defaults["k"]
        C0 = C0 if C0 is not None else layer._defaults["C0"]
        rho = rho if rho is not None else layer._defaults["rho"]
        T = T if T is not None else layer._defaults["T"]
        lunit = lunit if lunit is not None else layer._defaults["lunit"]
        Dunit = Dunit if Dunit is not None else layer._defaults["Dunit"]
        kunit = kunit if kunit is not None else layer._defaults["kunit"]
        Cunit = Cunit if Cunit is not None else layer._defaults["Cunit"]
        rhounit = rhounit if rhounit is not None else layer._defaults["rhounit"]
        Tunit = Tunit if Tunit is not None else layer._defaults["Tunit"]
        nmesh = nmesh if nmesh is not None else layer._defaults["nmesh"]
        nmeshmin = nmeshmin if nmeshmin is not None else layer._defaults["nmeshmin"]
        verbose = verbose if verbose is not None else layer._defaults["verbose"]
        verbosity = verbosity if verbosity is not None else layer._defaults["verbosity"]

        # Assign layer id properties
        layername = layername if layername is not None else layer._defaults["layername"]
        layertype = layertype if layertype is not None else layer._defaults["layertype"]
        layermaterial = layermaterial if layermaterial is not None else layer._defaults["layermaterial"]
        layercode = layercode if layercode is not None else layer._defaults["layercode"]

        # validate all physical parameters with their units
        l,lunit = check_units(l,lunit,layer._defaults["lunit"])
        D,Dunit = check_units(D,Dunit,layer._defaults["Dunit"])
        k,kunit = check_units(k,kunit,layer._defaults["kunit"])
        C0,Cunit = check_units(C0,Cunit,layer._defaults["Cunit"])
        rho,rhounit = check_units(rho,rhounit,layer._defaults["rhounit"])
        T,Tunit = check_units(T,Tunit,layer._defaults["Tunit"])

        # set attributes: id and physical properties
        self._name = [layername]
        self._type = [layertype]
        self._material = [layermaterial]
        self._code = [layercode]
        self._nlayer = 1
        self._l = l[:1]
        self._D = D[:1]
        self._k = k[:1]
        self._C0 = C0[:1]
        self._rho = rho[:1]
        self._T = T
        self._lunit = lunit
        self._Dunit = Dunit
        self._kunit = kunit
        self._Cunit = Cunit
        self._rhounit = rhounit
        self._Tunit = Tunit
        self._nmesh = nmesh
        self._nmeshmin = nmeshmin

        # initialize links for X = D,k,C0,T,l (see documentation of layerLink)
        # A link enables the values of X to be defined and controlled outside the instance
        self._Dlink  = self._initialize_link(Dlink, "D")
        self._klink  = self._initialize_link(klink, "k")
        self._C0link = self._initialize_link(C0link, "C0")
        self._Tlink  = self._initialize_link(Tlink, "T")
        self._llink  = self._initialize_link(llink, "l")

        # set substance, medium and related D and k models
        if isinstance(substance,str):
            substance = migrant(substance)
        if substance is not None and not isinstance(substance,migrant):
            raise ValueError(f"subtance must be None a or a migrant not a {type(substance).__name__}")
        self._substance = substance
        if medium is not None:
            from patankar.food import foodlayer # local import only if needed
            if not isinstance(medium,foodlayer):
                raise ValueError(f"medium must be None or a foodlayer not a {type(medium).__name__}")
        self._medium = medium
        self._Dmodel = "default"  # do not use directly self._compute_Dmodel (force refresh)
        self._kmodel = "default"  # do not use directly self._compute_kmodel (force refresh)

        # set history for all layers merged with +
        self._layerclass_history = []
        self._ispolymer_history = []
        self._chemicalsubstance_history = []

        # set verbosity attributes
        self.verbosity = 0 if verbosity is None else verbosity
        self.verbose = verbosity>0 if verbose is None else verbose

        # we initialize the acknowledgment process for future property propagation
        self._hasbeeninherited = {}


    # --------------------------------------------------------------------
    # Helper method: initializes and validates layerLink attributes
    # (Dlink, klink, C0link, Tlink, llink)
    # --------------------------------------------------------------------
    def _initialize_link(self, link, expected_property):
        """
        Initializes and validates a layerLink attribute.

        Parameters:
        ----------
        link : layerLink or None
            The `layerLink` instance to be assigned.
        expected_property : str
            The expected property name (e.g., "D", "k", "C0", "T").

        Returns:
        -------
        layerLink or None
            The validated `layerLink` instance or None.

        Raises:
        -------
        TypeError:
            If `link` is not a `layerLink` or `None`.
        ValueError:
            If `link.property` does not match `expected_property`.
        """
        if link is None:
            return None
        if isinstance(link, layerLink):
            if link.property == expected_property:
                return link
            raise ValueError(f'{expected_property}link.property should be "{expected_property}" not "{link.property}"')
        raise TypeError(f"{expected_property}link must be a layerLink not a {type(link).__name__}")


    # --------------------------------------------------------------------
    # Class method returning help() for the end user
    # --------------------------------------------------------------------
    @classmethod
    def help(cls):
        """
        Prints a dynamically formatted summary of all input parameters,
        adjusting column widths based on content and wrapping long descriptions.
        """

        # Column Headers
        headers = ["Parameter", "Default Value", "Has Synonyms?", "Description"]
        col_widths = [len(h) for h in headers]  # Start with header widths

        # Collect Data Rows
        rows = []
        for param, default in cls._defaults.items():
            has_synonyms = "✅ Yes" if param in cls._synonyms else "❌ No"
            description = cls._descriptionInputs.get(param, "No description available")

            # Update column widths dynamically
            col_widths[0] = max(col_widths[0], len(param))
            col_widths[1] = max(col_widths[1], len(str(default)))
            col_widths[2] = max(col_widths[2], len(has_synonyms))
            col_widths[3] = max(col_widths[3], len(description))

            rows.append([param, str(default), has_synonyms, description])

        # Function to wrap text for a given column width
        def wrap_text(text, width):
            return textwrap.fill(text, width)

        # Print Table with Adjusted Column Widths
        separator = "+-" + "-+-".join("-" * w for w in col_widths) + "-+"
        print("\n### **Accepted Parameters and Defaults**\n")
        print(separator)
        print("| " + " | ".join(h.ljust(col_widths[i]) for i, h in enumerate(headers)) + " |")
        print(separator)
        for row in rows:
            # Wrap text in the description column
            row[3] = wrap_text(row[3], col_widths[3])

            # Print row
            print("| " + " | ".join(row[i].ljust(col_widths[i]) for i in range(3)) + " | " + row[3])
        print(separator)

        # Synonyms Table
        print("\n### **Parameter Synonyms**\n")
        syn_headers = ["Parameter", "Synonyms"]
        syn_col_widths = [
            max(len("Parameter"), max(len(k) for k in cls._synonyms.keys())),  # Ensure it fits "Parameter"
            max(len("Synonyms"), max(len(", ".join(v)) for v in cls._synonyms.values()))  # Ensure it fits "Synonyms"
        ]
        syn_separator = "+-" + "-+-".join("-" * w for w in syn_col_widths) + "-+"
        print(syn_separator)
        print("| " + " | ".join(h.ljust(syn_col_widths[i]) for i, h in enumerate(syn_headers)) + " |")
        print(syn_separator)
        for param, synonyms in cls._synonyms.items():
            print(f"| {param.ljust(syn_col_widths[0])} | {', '.join(synonyms).ljust(syn_col_widths[1])} |")
        print(syn_separator)


    # --------------------------------------------------------------------
    # Class method to handle ambiguous definitions from end-user
    # --------------------------------------------------------------------
    @classmethod
    def resolvename(cls, param_value, param_key, **unresolved):
        """
        Resolves the correct parameter value using known synonyms.

        - If param_value is already set (not None), return it.
        - If a synonym exists in **unresolved, assign its value.
        - If multiple synonyms of the same parameter appear in **unresolved, raise an error.
        - Otherwise, return None.

        Parameters:
        - `param_value` (any): The original value (if provided).
        - `param_key` (str): The legitimate parameter name we are resolving.
        - `unresolved` (dict): The dictionary of unrecognized keyword arguments.

        Returns:
        - The resolved value or None if not found.
        """
        if param_value is not None:
            return param_value  # The parameter is explicitly defined, do not override
        if not unresolved:      # shortcut
            return None
        resolved_value = None
        found_keys = []
        # Check if param_key itself is present in unresolved
        if param_key in unresolved:
            found_keys.append(param_key)
            resolved_value = unresolved[param_key]
        # Check if any of its synonyms are in unresolved
        if param_key in cls._synonyms:
            for synonym in cls._synonyms[param_key]:
                if synonym in unresolved:
                    found_keys.append(synonym)
                    resolved_value = unresolved[synonym]
        # Raise error if multiple synonyms were found
        if len(found_keys) > 1:
            raise ValueError(
                f"Conflicting definitions: Multiple synonyms {found_keys} were provided for '{param_key}'."
            )
        return resolved_value


    # --------------------------------------------------------------------
    # overloading binary addition (note that the output is of type layer)
    # --------------------------------------------------------------------
    def __add__(self, other):
        """ C = A + B | overload + operator """
        if isinstance(other, layer):
            res = duplicate(self)
            res._nmeshmin = min(self._nmeshmin, other._nmeshmin)
            # Propagate substance
            if self._substance is None:
                res._substance = other._substance
            else:
                if isinstance(self._substance, migrant) and isinstance(other._substance, migrant):
                    if self._substance.M != other._substance.M:
                        print("Warning: the smallest substance is propagated everywhere")
                    res._substance = self._substance if self._substance.M <= other._substance.M else other._substance
                else:
                    res._substance = None
            # Concatenate general attributes
            for p in ["_name", "_type", "_material", "_code", "_nlayer"]:
                setattr(res, p, getattr(self, p) + getattr(other, p))
            # Concatenate numeric arrays
            for p in ["_l", "_D", "_k", "_C0", "_rho", "_T"]:
                setattr(res, p, np.concatenate((getattr(self, p), getattr(other, p))))
            # Handle history tracking
            res._layerclass_history = self.layerclass_history + other.layerclass_history
            res._ispolymer_history = self.ispolymer_history + other.ispolymer_history
            res._chemicalsubstance_history = self.chemicalsubstance_history + other.chemicalsubstance_history
            # Manage layerLink attributes (Dlink, klink, C0link, Tlink, llink)
            property_map = {
                "Dlink": ("D", self.Dlink, other.Dlink),
                "klink": ("k", self.klink, other.klink),
                "C0link": ("C0", self.C0link, other.C0link),
                "Tlink": ("T", self.Tlink, other.Tlink),
                "llink": ("l", self.llink, other.llink),
            }
            for attr, (prop, self_link, other_link) in property_map.items():
                if (self_link is not None) and (other_link is not None):
                    # Case 1: Both have a link → Apply `+`
                    setattr(res, '_'+attr, self_link + other_link)
                elif self_link is not None:
                    # Case 2: Only self has a link → Use as-is
                    setattr(res, '_'+attr, self_link)
                elif other_link is not None:
                    # Case 3: Only other has a link → Shift indices and use
                    shifted_link = duplicate(other_link)
                    shifted_link.indices += len(getattr(self, prop))
                    setattr(res, '_'+attr, shifted_link)
                else:
                    # Case 4: Neither has a link → Result is None
                    setattr(res, '_'+attr, None)
            return res
        else:
            raise ValueError("Invalid layer object")


    # --------------------------------------------------------------------
    # overloading binary multiplication (note that the output is of type layer)
    # --------------------------------------------------------------------
    def __mul__(self,ntimes):
        """ nA = A*n | overload * operator """
        if isinstance(ntimes, int) and ntimes>0:
            res = duplicate(self)
            if ntimes>1:
                for n in range(1,ntimes): res += self
            return res
        else: raise ValueError("multiplicator should be a strictly positive integer")


    # --------------------------------------------------------------------
    # len method
    # --------------------------------------------------------------------
    def __len__(self):
        """ length method """
        return self._nlayer

    # --------------------------------------------------------------------
    # object indexing (get,set) method
    # --------------------------------------------------------------------
    def __getitem__(self,i):
        """ get indexing method """
        res = duplicate(self)
        # check indices
        isscalar = isinstance(i,int)
        if isinstance(i,slice):
            if i.step==None: j = list(range(i.start,i.stop))
            else: j = list(range(i.start,i.stop,i.step))
            res._nlayer = len(j)
        if isinstance(i,int): res._nlayer = 1
        # pick indices for each property
        for p in ["_name","_type","_material","_l","_D","_k","_C0"]:
            content = getattr(self,p)
            try:
                if isscalar: setattr(res,p,content[i:i+1])
                else: setattr(res,p,content[i])
            except IndexError as err:
                if self.verbosity>0 and self.verbose:
                    print("bad layer object indexing: ",err)
        return res

    def __setitem__(self,i,other):
        """ set indexing method """
        # check indices
        if isinstance(i,slice):
            if i.step==None: j = list(range(i.start,i.stop))
            else: j = list(range(i.start,i.stop,i.step))
        elif isinstance(i,int): j = [i]
        else:raise IndexError("invalid index")
        islayer = isinstance(other,layer)
        isempty = not islayer and isinstance(other,list) and len(other)<1
        if isempty:         # empty right hand side
            for p in ["_name","_type","_material","_l","_D","_k","_C0"]:
                content = getattr(self,p)
                try:
                    newcontent = [content[k] for k in range(self._nlayer) if k not in j]
                except IndexError as err:
                    if self.verbosity>0 and self.verbose:
                        print("bad layer object indexing: ",err)
                if isinstance(content,np.ndarray) and not isinstance(newcontent,np.ndarray):
                    newcontent = np.array(newcontent)
                setattr(self,p,newcontent)
            self._nlayer = len(newcontent)
        elif islayer:        # islayer right hand side
            nk1 = len(j)
            nk2 = other._nlayer
            if nk1 != nk2:
                raise IndexError("the number of elements does not match the number of indices")
            for p in ["_name","_type","_material","_l","_D","_k","_C0"]:
                content1 = getattr(self,p)
                content2 = getattr(other,p)
                for k in range(nk1):
                    try:
                        content1[j[k]] = content2[k]
                    except IndexError as err:
                        if self.verbosity>0 and self.verbose:
                            print("bad layer object indexing: ",err)
                setattr(self,p,content1)
        else:
            raise ValueError("only [] or layer object are accepted")


    # --------------------------------------------------------------------
    # Getter methods (show private/hidden properties and meta-properties)
    # --------------------------------------------------------------------
    # Return class or instance attributes
    @property
    def physicalstate(self): return self._physicalstate
    @property
    def chemicalclass(self): return self._chemicalclass
    @property
    def chemicalsubstance(self): return self._chemicalsubstance
    @property
    def polarityindex(self):
        # rescaled to match predictions - standard scale [0,10.2] - predicted scale [0,7.12]
        return self._polarityindex * migrant("water").polarityindex/10.2
    @property
    def ispolymer(self): return self.chemicalclass == "polymer"
    @property
    def issolid(self): return self.physicalstate == "solid"
    @property
    def layerclass_history(self):
        return self._layerclass_history if self._layerclass_history != [] else [self.layerclass]
    @property
    def ispolymer_history(self):
        return self._ispolymer_history if self._ispolymer_history != [] else [self.ispolymer]
    @property
    def chemicalsubstance_history(self):
        return self._chemicalsubstance_history if self._chemicalsubstance_history != [] else [self.chemicalsubstance]
    @property
    def layerclass(self): return type(self).__name__
    @property
    def name(self): return self._name
    @property
    def type(self): return self._type
    @property
    def material(self): return self._material
    @property
    def code(self): return self._code
    @property
    def l(self): return self._l if not self.hasllink else self.llink.getfull(self._l)
    @property
    def D(self):
        Dtmp = None
        if self.Dmodel == "default": # default behavior
            Dtmp = self._compute_Dmodel()
        elif callable(self.Dmodel): # user override
            Dtmp = self.Dmodel()
        if Dtmp is not None:
            Dtmp = np.full_like(self._D, Dtmp,dtype=np.float64)
            if self.hasDlink:
                return self.Dlink.getfull(Dtmp) # substitution rules are applied as defined in Dlink
            else:
                return Dtmp
        return self._D if not self.hasDlink else self.Dlink.getfull(self._D)
    @property
    def k(self):
        ktmp = None
        if self.kmodel == "default": # default behavior
            ktmp = self._compute_kmodel()
        elif callable(self.kmodel): # user override
            ktmp = self.kmodel()
        if ktmp is not None:
            ktmp = np.full_like(self._k, ktmp,dtype=np.float64)
            if self.hasklink:
                return self.klink.getfull(ktmp) # substitution rules are applied as defined in klink
            else:
                return ktmp
        return self._k if not self.hasklink else self.klink.getfull(self._k)
    @property
    def C0(self): return self._C0 if not self.hasC0link else self.C0link.getfull(self._C0)
    @property
    def rho(self): return self._rho
    @property
    def T(self): return self._T if not self.hasTlink else self.Tlink.getfull(self._T)
    @property
    def TK(self): return self._T+T0K
    @property
    def lunit(self): return self._lunit
    @property
    def Dunit(self): return self._Dunit
    @property
    def kunit(self): return self._kunit
    @property
    def Cunit(self): return self._Cunit
    @property
    def rhounit(self): return self._rhounit
    @property
    def Tunit(self): return self._Tunit
    @property
    def TKunit(self): return "K"
    @property
    def n(self): return self._nlayer
    @property
    def nmesh(self): return self._nmesh
    @property
    def nmeshmin(self): return self._nmeshmin
    @property
    def resistance(self): return self.l*self.k/self.D
    @property
    def permeability(self): return self.D/(self.l*self.k)
    @property
    def lag(self): return self.l**2/(6*self.D)
    @property
    def pressure(self): return self.k*self.C0
    @property
    def thickness(self): return sum(self.l)
    @property
    def concentration(self): return sum(self.l*self.C0)/self.thickness
    @property
    def relative_thickness(self): return self.l/self.thickness
    @property
    def relative_resistance(self): return self.resistance/sum(self.resistance)
    @property
    def rank(self): return (self.n-np.argsort(np.array(self.resistance))).tolist()
    @property
    def referencelayer(self): return np.argmax(self.resistance)
    @property
    def lreferencelayer(self): return self.l[self.referencelayer]
    @property
    def Foscale(self): return self.D[self.referencelayer]/self.lreferencelayer**2

    # substance/solute/migrant/chemical (of class migrant or None)
    @property
    def substance(self): return self._substance
    @property
    def migrant(self): return self.substance # alias/synonym of substance
    @property
    def solute(self): return self.substance # alias/synonym of substance
    @property
    def chemical(self): return self.substance # alias/synonym of substance
    # medium (of class foodlayer or None)
    @property
    def medium(self): return self._medium

    # Dmodel and kmodel returned as properties (they are lambda functions)
    # Note about the implementation: They are attributes that remain None or a callable function
    # polymer and mass are updated on the fly (the code loops over all layers)
    @property
    def Dmodel(self):
        return self._Dmodel
    @Dmodel.setter
    def Dmodel(self,value):
        if value is None or callable(value):
            self._Dmodel = value
        else:
            raise ValueError("Dmodel must be None or a callable function")
    @property
    def _compute_Dmodel(self):
        """Return a callable function that evaluates D with updated parameters."""
        if not isinstance(self._substance,migrant) or self._substance.Deval() is None:
            return lambda **kwargs: None  # Return a function that always returns None
        template = self._substance.Dtemplate.copy()
        template.update()
        def func(**kwargs):
            D = np.empty_like(self._D)
            for (i,),T in np.ndenumerate(self.T.ravel()): # loop over all layers via T
                template.update(polymer=self.layerclass_history[i],T=T) # updated layer properties
                # inherit eventual user parameters
                D[i] = self._substance.D.evaluate(**dict(template, **kwargs))
            return D
        return func # we return a callable function not a value

    # polarity index and molar volume are updated on the fly
    @property
    def kmodel(self):
        return self._kmodel
    @kmodel.setter
    def kmodel(self,value):
        if value is None or callable(value):
            self._kmodel = value
        else:
            raise ValueError("kmodel must be None or a callable function")
    @property
    def _compute_kmodel(self):
        """Return a callable function that evaluates k with updated parameters."""
        if not isinstance(self._substance,migrant) or self._substance.keval() is None:
            return lambda **kwargs: None  # Return a function that always returns None
        template = self._substance.ktemplate.copy()
        # add solute (i) properties: Pi and Vi have been set by loadpubchem already
        template.update(ispolymer = True)
        def func(**kwargs):
            k = np.full_like(self._k,self._k,dtype=np.float64)
            for (i,),T in np.ndenumerate(self.T.ravel()): # loop over all layers via T
                if not self.ispolymer_history[i]: # k can be evaluated only in polymers via FH theory
                    continue # we keep the existing k value
                # add/update monomer properties
                monomer = migrant(self.chemicalsubstance_history[i])
                template.update(Pk = monomer.polarityindex,
                                Vk = monomer.molarvolumeMiller)
                # inherit eventual user parameters
                k[i] = self._substance.k.evaluate(**dict(template, **kwargs))
            return k
        return func # we return a callable function not a value


    @property
    def hasDmodel(self):
        """Returns True if a Dmodel has been defined"""
        if hasattr(self, "_compute_Dmodel"):
            if self._compute_Dmodel() is not None:
                return True
            elif callable(self.Dmodel):
                return self.Dmodel() is not None
        return False

    @property
    def haskmodel(self):
        """Returns True if a kmodel has been defined"""
        if hasattr(self, "_compute_kmodel"):
            if self._compute_kmodel() is not None:
                return True
            elif callable(self.kmodel):
                return self.kmodel() is not None
        return False


    # --------------------------------------------------------------------
    # comparators based on resistance
    # --------------------------------------------------------------------
    def __eq__(self, o):
        value1 = self.resistance if self._nlayer>1 else self.resistance[0]
        if isinstance(o,layer):
            value2 = o.resistance if o._nlayer>1 else o.resistance[0]
        else:
            value2 = o
        return value1==value2

    def __ne__(self, o):
        value1 = self.resistance if self._nlayer>1 else self.resistance[0]
        if isinstance(o,layer):
            value2 = o.resistance if o._nlayer>1 else o.resistance[0]
        else:
            value2 = o
        return value1!=value2

    def __lt__(self, o):
        value1 = self.resistance if self._nlayer>1 else self.resistance[0]
        if isinstance(o,layer):
            value2 = o.resistance if o._nlayer>1 else o.resistance[0]
        else:
            value2 = o
        return value1<value2

    def __gt__(self, o):
        value1 = self.resistance if self._nlayer>1 else self.resistance[0]
        if isinstance(o,layer):
            value2 = o.resistance if o._nlayer>1 else o.resistance[0]
        else:
            value2 = o
        return value1>value2

    def __le__(self, o):
        value1 = self.resistance if self._nlayer>1 else self.resistance[0]
        if isinstance(o,layer):
            value2 = o.resistance if o._nlayer>1 else o.resistance[0]
        else:
            value2 = o
        return value1<=value2

    def __ge__(self, o):
        value1 = self.resistance if self._nlayer>1 else self.resistance[0]
        if isinstance(o,layer):
            value2 = o.resistance if o._nlayer>1 else o.resistance[0]
        else:
            value2 = o
        return value1>=value2


    # --------------------------------------------------------------------
    # Generates mesh
    # --------------------------------------------------------------------
    def mesh(self,nmesh=None,nmeshmin=None):
        """ nmesh() generates mesh based on nmesh and nmeshmin, nmesh(nmesh=value,nmeshmin=value) """
        if nmesh==None: nmesh = self.nmesh
        if nmeshmin==None: nmeshmin = self.nmeshmin
        if nmeshmin>nmesh: nmeshmin,nmesh = nmesh, nmeshmin
        # X = mesh distribution (number of nodes per layer)
        X = np.ones(self._nlayer)
        for i in range(1,self._nlayer):
           X[i] = X[i-1]*(self.permeability[i-1]*self.l[i])/(self.permeability[i]*self.l[i-1])
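        # Nodes are allocated across layers in proportion to l/permeability = l**2*k/D,
        # then clipped to at least nmeshmin nodes per layer and renormalized so that
        # the total number of nodes is close to nmesh.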
        X = np.maximum(nmeshmin,np.ceil(nmesh*X/sum(X)))
        X = np.round((X/sum(X))*nmesh).astype(int)
        # do the mesh
        x0 = 0
        mymesh = []
        for i in range(self._nlayer):
            mymesh.append(mesh(self.l[i]/self.l[self.referencelayer],X[i],x0=x0,index=i))
            x0 += self.l[i]
        return mymesh

    # --------------------------------------------------------------------
    # Setter methods and tools to validate inputs checknumvalue and checktextvalue
    # --------------------------------------------------------------------
    @physicalstate.setter
    def physicalstate(self,value):
        if value not in ("solid","liquid","gas","supercritical"):
            raise ValueError(f"physicalstate must be solid/liduid/gas/supercritical and not {value}")
        self._physicalstate = value
    @chemicalclass.setter
    def chemicalclass(self,value):
        if value not in ("polymer","other"):
            raise ValueError(f"chemicalclass must be polymer/oher and not {value}")
        self._chemicalclass= value
    @chemicalsubstance.setter
    def chemicalsubstance(self,value):
        if not isinstance(value,str):
            raise ValueError("chemicalsubtance must be str not a {type(value).__name__}")
        self._chemicalsubstance= value
    @polarityindex.setter
    def polarityindex(self,value):
        if not isinstance(value,(float,int)):
            raise ValueError("polarity index must be float not a {type(value).__name__}")
        self._polarityindex= value

    def checknumvalue(self,value,ExpectedUnits=None):
        """ returns a validate value to set properties """
        if isinstance(value,tuple):
            value = check_units(value,ExpectedUnits=ExpectedUnits)
        if isinstance(value,int): value = float(value)
        if isinstance(value,float): value = np.array([value])
        if isinstance(value,list): value = np.array(value)
        if len(value)>self._nlayer:
            value = value[:self._nlayer]
            if self.verbosity>1 and self.verbose:
                print('dimension mismatch, the extra value(s) have been removed')
        elif len(value)<self._nlayer:
            value = np.concatenate((value,value[-1:]*np.ones(self._nlayer-len(value))))
            if self.verbosity>1 and self.verbose:
                print('dimension mismatch, the last value has been repeated')
        return value

    def checktextvalue(self,value):
        """ returns a validate value to set properties """
        if not isinstance(value,list): value = [value]
        if len(value)>self._nlayer:
            value = value[:self._nlayer]
            if self.verbosity>1 and self.verbose:
                print('dimension mismatch, the extra entry(ies) have been removed')
        elif len(value)<self._nlayer:
            value = value + value[-1:]*(self._nlayer-len(value))
            if self.verbosity>1 and self.verbose:
                print('dimension mismatch, the last entry has been repeated')
        return value

    @l.setter
    def l(self,value): self._l =self.checknumvalue(value,layer._defaults["lunit"])
    @D.setter
    def D(self,value): self._D=self.checknumvalue(value,layer._defaults["Dunit"])
    @k.setter
    def k(self,value): self._k =self.checknumvalue(value,layer._defaults["kunit"])
    @C0.setter
    def C0(self,value): self._C0 =self.checknumvalue(value,layer._defaults["Cunit"])
    @rho.setter
    def rho(self,value): self._rho =self.checknumvalue(value,layer._defaults["rhounit"])
    @T.setter
    def T(self,value): self._T =self.checknumvalue(value,layer._defaults["Tunit"])
    @name.setter
    def name(self,value): self._name =self.checktextvalue(value)
    @type.setter
    def type(self,value): self._type =self.checktextvalue(value)
    @material.setter
    def material(self,value): self._material =self.checktextvalue(value)
    @nmesh.setter
    def nmesh(self,value): self._nmesh = max(value,self._nlayer*self._nmeshmin)
    @nmeshmin.setter
    def nmeshmin(self,value): self._nmeshmin = max(value,round(self._nmesh/(2*self._nlayer)))
    @substance.setter
    def substance(self,value):
        if isinstance(value,str):
            value = migrant(value)
        if not isinstance(value,migrant) and value is not None:
            raise TypeError(f"value must be a migrant not a {type(value).__name__}")
        self._substance = value
    @migrant.setter
    def migrant(self,value):
        self.substance = value
    @chemical.setter
    def chemical(self,value):
        self.substance = value
    @solute.setter
    def solute(self,value):
        self.substance = value
    @medium.setter
    def medium(self,value):
        from patankar.food import foodlayer
        if not isinstance(value,foodlayer):
            raise TypeError(f"value must be a foodlayer not a {type(value).__name__}")
        self._medium = value

    # --------------------------------------------------------------------
    #  getter and setter for links: Dlink, klink, C0link, Tlink, llink
    # --------------------------------------------------------------------
    @property
    def Dlink(self):
        """Getter for Dlink"""
        return self._Dlink
    @Dlink.setter
    def Dlink(self, value):
        """Setter for Dlink"""
        self._Dlink = self._initialize_link(value, "D")
        if isinstance(value,layerLink): value._maxlength = self.n
    @property
    def klink(self):
        """Getter for klink"""
        return self._klink
    @klink.setter
    def klink(self, value):
        """Setter for klink"""
        self._klink = self._initialize_link(value, "k")
        if isinstance(value,layerLink): value._maxlength = self.n
    @property
    def C0link(self):
        """Getter for C0link"""
        return self._C0link
    @C0link.setter
    def C0link(self, value):
        """Setter for C0link"""
        self._C0link = self._initialize_link(value, "C0")
        if isinstance(value,layerLink): value._maxlength = self.n
    @property
    def Tlink(self):
        """Getter for Tlink"""
        return self._Tlink
    @Tlink.setter
    def Tlink(self, value):
        """Setter for Tlink"""
        self._Tlink = self._initialize_link(value, "T")
        if isinstance(value,layerLink): value._maxlength = self.n
    @property
    def llink(self):
        """Getter for llink"""
        return self._llink
    @llink.setter
    def llink(self, value):
        """Setter for llink"""
        self._llink = self._initialize_link(value, "l")
        if isinstance(value,layerLink): value._maxlength = self.n
    @property
    def hasDlink(self):
        """Returns True if Dlink is defined"""
        return self.Dlink is not None
    @property
    def hasklink(self):
        """Returns True if klink is defined"""
        return self.klink is not None
    @property
    def hasC0link(self):
        """Returns True if C0link is defined"""
        return self.C0link is not None
    @property
    def hasTlink(self):
        """Returns True if Tlink is defined"""
        return self.Tlink is not None
    @property
    def hasllink(self):
        """Returns True if llink is defined"""
        return self.llink is not None

    # --------------------------------------------------------------------
    # returned LaTeX-formatted properties
    # --------------------------------------------------------------------
    def Dlatex(self, numdigits=4, units=r"\mathrm{m^2 \cdot s^{-1}}",prefix="D=",mathmode="$"):
        """Returns diffusivity values (D) formatted in LaTeX scientific notation."""
        return [format_scientific_latex(D, numdigits, units, prefix,mathmode) for D in self.D]

    def klatex(self, numdigits=4, units="a.u.",prefix="k=",mathmode="$"):
        """Returns Henry-like values (k) formatted in LaTeX scientific notation."""
        return [format_scientific_latex(k, numdigits, units, prefix,mathmode) for k in self.k]

    def llatex(self, numdigits=4, units="m",prefix="l=",mathmode="$"):
        """Returns thickness values (k) formatted in LaTeX scientific notation."""
        return [format_scientific_latex(l, numdigits, units, prefix,mathmode) for l in self.l]

    def C0latex(self, numdigits=4, units="a.u.",prefix="C0=",mathmode="$"):
        """Returns Initial Concentratoin values (C0) formatted in LaTeX scientific notation."""
        return [format_scientific_latex(c, numdigits, units, prefix,mathmode) for c in self.C0]

    # --------------------------------------------------------------------
    # hash methods (assembly and layer-by-layer)
    # note that list needs to be converted into tuples to be hashed
    # --------------------------------------------------------------------
    def __hash__(self):
        """ hash layer-object (assembly) method """
        return hash((tuple(self._name),
                     tuple(self._type),
                     tuple(self._material),
                     tuple(self._l),
                     tuple(self._D),
                     tuple(self.k),
                     tuple(self._C0),
                     tuple(self._rho)))

    # layer-by-layer @property = decoration to consider it
    # as a property instead of a method/attribute
    # comprehension for n in range(self._nlayer) applies it to all layers
    @property
    def hashlayer(self):
        """ hash layer (layer-by-layer) method """
        return [hash((self._name[n],
                      self._type[n],
                      self._material[n],
                      self._l[n],
                      self._D[n],
                      self.k[n],
                      self._C0[n],
                      self._rho[n]))
                for n in range(self._nlayer)
                ]


    # --------------------------------------------------------------------
    # repr method (since the getter are defined, the '_' is dropped)
    # --------------------------------------------------------------------
    # density and temperature are not shown
    def __repr__(self):
        """ disp method """
        print("\n[%s version=%0.4g, contact=%s]" % (self.__description,self.__version,self.__contact))
        if self._nlayer==0:
            print("empty %s" % (self.__description))
        else:
            hasDmodel, haskmodel = self.hasDmodel, self.haskmodel
            hasDlink, hasklink, hasC0link, hasTlink, hasllink = self.hasDlink, self.hasklink, self.hasC0link, self.hasTlink, self.hasllink
            properties_hasmodel = {"l":False,"D":hasDmodel,"k":haskmodel,"C0":False}
            properties_haslink = {"l":hasllink,"D":hasDlink,"k":hasklink,"C0":hasC0link,"T":hasTlink}
            if hasDmodel or haskmodel:
                properties_hasmodel["T"] = False
            fmtval = '%10s: '+self._printformat+" [%s]"
            fmtstr = '%10s= %s'
            if self._nlayer==1:
                print(f'monolayer of {self.__description}:')
            else:
                print(f'{self._nlayer}-multilayer of {self.__description}:')
            for n in range(1,self._nlayer+1):
                modelinfo = {
                    "D": f"{self._substance.D.__name__}({self.layerclass_history[n-1]},{self._substance},T={float(self.T[0])} {self.Tunit})" if hasDmodel else "",
                    "k": f"{self._substance.k.__name__}(<{self.chemicalsubstance_history[n-1]}>,{self._substance})" if haskmodel else "",
                    }
                print('-- [ layer %d of %d ] ---------- barrier rank=%d --------------'
                      % (n,self._nlayer,self.rank[n-1]))
                for p in ["name","type","material","code"]:
                    v = getattr(self,p)
                    print('%10s: "%s"' % (p,v[n-1]),flush=True)
                for p in properties_hasmodel.keys():
                    v = getattr(self,p)                 # value
                    vunit = getattr(self,p[0]+"unit")   # value unit
                    print(fmtval % (p,v[n-1],vunit),flush=True)
                    isoverridenbylink = False
                    if properties_haslink[p]:
                        isoverridenbylink = not np.isnan(getattr(self,p+"link").get(n-1))
                    if isoverridenbylink:
                        print(fmtstr % ("",f"value controlled by {p}link[{n-1}] (external)"),flush=True)
                    elif properties_hasmodel[p]:
                        print(fmtstr % ("",modelinfo[p]),flush=True)
        return str(self)

    def __str__(self):
        """Formatted string representation of layer"""
        all_identical = len(set(self.layerclass_history)) == 1
        cls = self.__class__.__name__ if all_identical else "multilayer"
        return f"<{cls} with {self.n} layer{'s' if self.n>1 else ''}: {self.name}>"

    # --------------------------------------------------------------------
    # Returns the equivalent dictionary from an object for debugging
    # --------------------------------------------------------------------
    def _todict(self):
        """ returns the equivalent dictionary from an object """
        return dict((key, getattr(self, key)) for key in dir(self) if key not in dir(self.__class__))
    # --------------------------------------------------------------------

    # --------------------------------------------------------------------
    # Simplify layers by collecting similar ones
    # --------------------------------------------------------------------
    def simplify(self):
        """ merge continuous layers of the same type """
        nlayer = self._nlayer
        if nlayer>1:
           res = self[0]
           ires = 0
           ireshash = res.hashlayer[0]
           for i in range(1,nlayer):
               if self.hashlayer[i]==ireshash:
                   res.l[ires] = res.l[ires]+self.l[i]
               else:
                   res = res + self[i]
                   ires = ires+1
                   ireshash = self.hashlayer[i]
        else:
             res = self.copy()
        return res

    # --------------------------------------------------------------------
    # Split layers into a tuple
    # --------------------------------------------------------------------
    def split(self):
        """ split layers """
        out = ()
        if self._nlayer>0:
            for i in range(self._nlayer):
                out = out + (self[i],) # (,) special syntax for tuple singleton
        return out

    # --------------------------------------------------------------------
    # deepcopy
    # --------------------------------------------------------------------
    def copy(self,**kwargs):
        """
        Creates a deep copy of the current layer instance.

        Returns:
        - layer: A new layer instance identical to the original.
        """
        return duplicate(self).update(**kwargs)

    # --------------------------------------------------------------------
    # update contact conditions from a foodphysics instance (or do the reverse)
    # material << medium
    # material@medium
    # --------------------------------------------------------------------
    def _from(self,medium=None):
        """Propagates contact conditions from food instance"""
        from patankar.food import foodphysics, foodlayer
        if not isinstance(medium,foodphysics):
            raise TypeError(f"medium must be a foodphysics, foodlayer not a {type(medium).__name__}")
        if not hasattr(medium, "contacttemperature"):
            medium.contacttemperature = self.T[0]
        T = medium.get_param("contacttemperature",40,acceptNone=False)
        self.T = np.full_like(self.T,T,dtype=np.float64)
        if medium.substance is not None:
            self.substance = medium.substance
        else:
            medium.substance = self.substance # do the reverse if substance is not defined in medium
        # inherit fully medium only if it is a foodlayer (foodphysics is too restrictive)
        if isinstance(medium,foodlayer):
            self.medium = medium

    # overload operator <<
    def __lshift__(self, medium):
        """Overloads << to propagate contact conditions from food."""
        self._from(medium)
    # overload operator @ (same as <<)
    def __matmul__(self, medium):
        """Overloads @ to propagate contact conditions from food."""
        self._from(medium)


    # --------------------------------------------------------------------
    # Inheritance registration mechanism associated with food >> layer
    # It is used by food, not by layer (please refer to food.py).
    # Note that layer >> food means mass transfer simulation
    # --------------------------------------------------------------------
    def acknowledge(self, what=None, category=None):
        """
        Register inherited properties under a given category.

        Parameters:
        -----------
        what : str or list of str or a set
            The properties or attributes that have been inherited.
        category : str
            The category under which the properties are grouped.
        """
        if category is None or what is None:
            raise ValueError("Both 'what' and 'category' must be provided.")
        if isinstance(what, str):
            what = {what}  # Convert string to a set
        elif isinstance(what, list):
            what = set(what)  # Convert list to a set for uniqueness
        elif not isinstance(what,set):
            raise TypeError("'what' must be a string, a list, or a set of strings.")
        if category not in self._hasbeeninherited:
            self._hasbeeninherited[category] = set()
        self._hasbeeninherited[category].update(what)

    # --------------------------------------------------------------------
    # migration simulation overloaded as sim = layer >> food
    # using layer >> food without capturing the output also works.
    # The result is stored in food.lastsimulation
    # --------------------------------------------------------------------
    def contact(self,medium,**kwargs):
        """alias to migration method"""
        return self.migration(medium,**kwargs)

    def migration(self,medium=None,**kwargs):
        """interface to simulation engine: senspantankar"""
        from patankar.food import foodphysics
        from patankar.migration import senspatankar
        if medium is None:
            medium = self.medium
        if not isinstance(medium,foodphysics):
            raise TypeError(f"medium must be a foodphysics not a {type(medium).__name__}")
        sim = senspatankar(self,medium,**kwargs)
        medium.lastsimulation = sim # store the last simulation result in medium
        medium.lastinput = self # store the last input (self)
        sim.savestate(self,medium) # store the inputs in sim for chaining
        return sim

    # overloading operation
    def __rshift__(self, medium):
        """Overloads >> to propagate migration to food."""
        from patankar.food import foodphysics
        if not isinstance(medium,foodphysics):
            raise TypeError(f"medium must be a foodphysics object not a {type(medium).__name__}")
        return self.contact(medium)

    # --------------------------------------------------------------------
    # Safe update method
    # --------------------------------------------------------------------
    def update(self, **kwargs):
        """
        Update layer parameters following strict validation rules.

        Rules:
        1) key should be listed in self._defaults
        2) for some keys, synonyms are acceptable as reported in self._synonyms
        3) values cannot be None if they were not None in _defaults
        4) values should be str if they were initially str, idem with bool
        5) values which were numeric (int, float, np.ndarray) should remain numeric.
        6) lists are acceptable as numeric arrays
        7) all numerical (float, np.ndarray, list) except int must be converted into numpy arrays.
           Values which were int in _defaults must remain int and an error should be raised
           if a float value is proposed.
        8) keys listed in _parametersWithUnits can be assigned with tuples (value, "unit").
           They will be converted automatically with check_units(value).
        9) for parameters with a default value None, any value is acceptable
        10) A clear error message should be displayed for any bad value showing the
            current value of the parameter and its default value.
        """

        if not kwargs:  # shortcut
            return self # for chaining

        param_counts = {key: 0 for key in self._defaults}  # Track how many times each param is set

        def resolve_key(key):
            """Resolve key considering synonyms and check for duplicates."""
            for main_key, synonyms in self._synonyms.items():
                if key == main_key or key in synonyms:
                    param_counts[main_key] += 1
                    return main_key
            param_counts[key] += 1
            return key

        def validate_value(key, value):
            """Validate and process the value according to the rules."""
            default_value = self._defaults[key]

            # Rule 3: values cannot be None if they were not None in _defaults
            if value is None and default_value is not None:
                raise ValueError(f"Invalid value for '{key}': None is not allowed. "
                                 f"Current: {getattr(self, key)}, Default: {default_value}")

            # Rule 9: If default is None, any value is acceptable
            if default_value is None:
                return value

            # Rule 4 & 5: Ensure type consistency (str, bool, or numeric types)
            if isinstance(default_value, str) and not isinstance(value, str):
                raise TypeError(f"Invalid type for '{key}': Expected str, got {type(value).__name__}. "
                                f"Current: {getattr(self, key)}, Default: {default_value}")
            if isinstance(default_value, bool) and not isinstance(value, bool):
                raise TypeError(f"Invalid type for '{key}': Expected bool, got {type(value).__name__}. "
                                f"Current: {getattr(self, key)}, Default: {default_value}")

            # Rule 6 & 7: Convert numeric types properly
            if isinstance(default_value, (int, float, np.ndarray)):
                if isinstance(value, list):
                    value = np.array(value)

                if isinstance(default_value, int):
                    if isinstance(value, float) or (isinstance(value, np.ndarray) and np.issubdtype(value.dtype, np.floating)):
                        raise TypeError(f"Invalid type for '{key}': Expected integer, got float. "
                                        f"Current: {getattr(self, key)}, Default: {default_value}")
                    if isinstance(value, (int, np.integer)):
                        return int(value)  # Ensure it remains an int
                    raise TypeError(f"Invalid type for '{key}': Expected integer, got {type(value).__name__}. "
                                    f"Current: {getattr(self, key)}, Default: {default_value}")

                if isinstance(value, (int, float, list, np.ndarray)):
                    return np.array(value, dtype=float)  # Convert everything to np.array for floats

                raise TypeError(f"Invalid type for '{key}': Expected numeric, got {type(value).__name__}. "
                                f"Current: {getattr(self, key)}, Default: {default_value}")

            # Rule 8: Convert units if applicable
            if key in self._parametersWithUnits and isinstance(value, tuple):
                value, unit = value
                converted_value, _ = check_units((value, unit), ExpectedUnits=self._parametersWithUnits[key])
                return converted_value

            return value

        # Apply updates while tracking parameter occurrences
        for key, value in kwargs.items():
            resolved_key = resolve_key(key)

            if resolved_key not in self._defaults:
                raise KeyError(f"Invalid key '{key}'. Allowed keys: {list(self._defaults.keys())}.")

            try:
                validated_value = validate_value(resolved_key, value)
                setattr(self, resolved_key, validated_value)
            except (TypeError, ValueError) as e:
                raise ValueError(f"Error updating '{key}': {e}")

        # Ensure that no parameter was set multiple times due to synonyms
        duplicate_keys = [k for k, v in param_counts.items() if v > 1]
        if duplicate_keys:
            raise ValueError(f"Duplicate assignment detected for parameters: {duplicate_keys}. "
                             "Use only one synonym per parameter.")

        return self # to enable chaining

    # Basic tool for debugging
    # --------------------------------------------------------------------
    # STRUCT method - returns the equivalent dictionary from an object
    # --------------------------------------------------------------------
    def struct(self):
        """ returns the equivalent dictionary from an object """
        return dict((key, getattr(self, key)) for key in dir(self) if key not in dir(self.__class__))

Subclasses

  • patankar.layer.AdhesiveAcrylate
  • patankar.layer.AdhesiveEVA
  • patankar.layer.AdhesiveNaturalRubber
  • patankar.layer.AdhesivePU
  • patankar.layer.AdhesivePVAC
  • patankar.layer.AdhesiveSyntheticRubber
  • patankar.layer.AdhesiveVAE
  • patankar.layer.Cardboard
  • patankar.layer.HDPE
  • patankar.layer.HIPS
  • patankar.layer.LDPE
  • patankar.layer.LLDPE
  • patankar.layer.PA6
  • patankar.layer.PA66
  • patankar.layer.PBT
  • patankar.layer.PEN
  • patankar.layer.PP
  • patankar.layer.PPrubber
  • patankar.layer.PS
  • patankar.layer.Paper
  • patankar.layer.SBS
  • patankar.layer.air
  • patankar.layer.gPET
  • patankar.layer.oPP
  • patankar.layer.plasticizedPVC
  • patankar.layer.rPET
  • patankar.layer.rigidPVC

Static methods

def help()

Prints a dynamically formatted summary of all input parameters, adjusting column widths based on content and wrapping long descriptions.

Expand source code
@classmethod
def help(cls):
    """
    Prints a dynamically formatted summary of all input parameters,
    adjusting column widths based on content and wrapping long descriptions.
    """

    # Column Headers
    headers = ["Parameter", "Default Value", "Has Synonyms?", "Description"]
    col_widths = [len(h) for h in headers]  # Start with header widths

    # Collect Data Rows
    rows = []
    for param, default in cls._defaults.items():
        has_synonyms = "✅ Yes" if param in cls._synonyms else "❌ No"
        description = cls._descriptionInputs.get(param, "No description available")

        # Update column widths dynamically
        col_widths[0] = max(col_widths[0], len(param))
        col_widths[1] = max(col_widths[1], len(str(default)))
        col_widths[2] = max(col_widths[2], len(has_synonyms))
        col_widths[3] = max(col_widths[3], len(description))

        rows.append([param, str(default), has_synonyms, description])

    # Function to wrap text for a given column width
    def wrap_text(text, width):
        return textwrap.fill(text, width)

    # Print Table with Adjusted Column Widths
    separator = "+-" + "-+-".join("-" * w for w in col_widths) + "-+"
    print("\n### **Accepted Parameters and Defaults**\n")
    print(separator)
    print("| " + " | ".join(h.ljust(col_widths[i]) for i, h in enumerate(headers)) + " |")
    print(separator)
    for row in rows:
        # Wrap text in the description column
        row[3] = wrap_text(row[3], col_widths[3])

        # Print row
        print("| " + " | ".join(row[i].ljust(col_widths[i]) for i in range(3)) + " | " + row[3])
    print(separator)

    # Synonyms Table
    print("\n### **Parameter Synonyms**\n")
    syn_headers = ["Parameter", "Synonyms"]
    syn_col_widths = [
        max(len("Parameter"), max(len(k) for k in cls._synonyms.keys())),  # Ensure it fits "Parameter"
        max(len("Synonyms"), max(len(", ".join(v)) for v in cls._synonyms.values()))  # Ensure it fits "Synonyms"
    ]
    syn_separator = "+-" + "-+-".join("-" * w for w in syn_col_widths) + "-+"
    print(syn_separator)
    print("| " + " | ".join(h.ljust(syn_col_widths[i]) for i, h in enumerate(syn_headers)) + " |")
    print(syn_separator)
    for param, synonyms in cls._synonyms.items():
        print(f"| {param.ljust(syn_col_widths[0])} | {', '.join(synonyms).ljust(syn_col_widths[1])} |")
    print(syn_separator)
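
A minimal usage sketch of this classmethod, assuming the subclass `LDPE` (listed above) can be used directly:

```python
# Minimal sketch: print the parameter summary for a layer subclass
from patankar.layer import LDPE

LDPE.help()   # lists accepted parameters, defaults, synonyms, and descriptions
```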
def resolvename(param_value, param_key, **unresolved)

Resolves the correct parameter value using known synonyms.

  • If param_value is already set (not None), return it.
  • If a synonym exists in **unresolved, assign its value.
  • If multiple synonyms of the same parameter appear in **unresolved, raise an error.
  • Otherwise, return None.

Parameters:

  • param_value (any): The original value (if provided).
  • param_key (str): The legitimate parameter name we are resolving.
  • unresolved (dict): The dictionary of unrecognized keyword arguments.

Returns:

  • The resolved value or None if not found.

Expand source code
@classmethod
def resolvename(cls, param_value, param_key, **unresolved):
    """
    Resolves the correct parameter value using known synonyms.

    - If param_value is already set (not None), return it.
    - If a synonym exists in **unresolved, assign its value.
    - If multiple synonyms of the same parameter appear in **unresolved, raise an error.
    - Otherwise, return None.

    Parameters:
    - `param_value` (any): The original value (if provided).
    - `param_key` (str): The legitimate parameter name we are resolving.
    - `unresolved` (dict): The dictionary of unrecognized keyword arguments.

    Returns:
    - The resolved value or None if not found.
    """
    if param_value is not None:
        return param_value  # The parameter is explicitly defined, do not override
    if not unresolved:      # shortcut
        return None
    resolved_value = None
    found_keys = []
    # Check if param_key itself is present in unresolved
    if param_key in unresolved:
        found_keys.append(param_key)
        resolved_value = unresolved[param_key]
    # Check if any of its synonyms are in unresolved
    if param_key in cls._synonyms:
        for synonym in cls._synonyms[param_key]:
            if synonym in unresolved:
                found_keys.append(synonym)
                resolved_value = unresolved[synonym]
    # Raise error if multiple synonyms were found
    if len(found_keys) > 1:
        raise ValueError(
            f"Conflicting definitions: Multiple synonyms {found_keys} were provided for '{param_key}'."
        )
    return resolved_value
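
A short sketch of how `resolvename()` arbitrates between an explicit value and keyword synonyms; the synonym `"thickness"` for `"l"` is an assumption used only for illustration:

```python
# Hypothetical sketch: resolve a constructor keyword through the synonym table.
# Assumes "thickness" is registered in layer._synonyms as a synonym of "l".
from patankar.layer import layer

kwargs = {"thickness": 1e-4}                     # unrecognized keyword collected in **unresolved
print(layer.resolvename(None, "l", **kwargs))    # 1e-4 if the synonym is registered
print(layer.resolvename(2e-4, "l", **kwargs))    # 2e-4: an explicit value is never overridden
```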

Instance variables

var C0
Expand source code
@property
def C0(self): return self._C0 if not self.hasC0link else self.C0link.getfull(self._C0)

Getter for C0link

Expand source code
@property
def C0link(self):
    """Getter for C0link"""
    return self._C0link
var Cunit
Expand source code
@property
def Cunit(self): return self._Cunit
var D
Expand source code
@property
def D(self):
    Dtmp = None
    if self.Dmodel == "default": # default behavior
        Dtmp = self._compute_Dmodel()
    elif callable(self.Dmodel): # user override
        Dtmp = self.Dmodel()
    if Dtmp is not None:
        Dtmp = np.full_like(self._D, Dtmp,dtype=np.float64)
        if self.hasDlink:
            return self.Dlink.getfull(Dtmp) # substitution rules are applied as defined in Dlink
        else:
            return Dtmp
    return self._D if not self.hasDlink else self.Dlink.getfull(self._D)

Getter for Dlink

Expand source code
@property
def Dlink(self):
    """Getter for Dlink"""
    return self._Dlink
var Dmodel
Expand source code
@property
def Dmodel(self):
    return self._Dmodel
var Dunit
Expand source code
@property
def Dunit(self): return self._Dunit
var Foscale
Expand source code
@property
def Foscale(self): return self.D[self.referencelayer]/self.lreferencelayer**2
var T
Expand source code
@property
def T(self): return self._T if not self.hasTlink else self.Tlink.getfull(self._T)
var TK
Expand source code
@property
def TK(self): return self._T+T0K
var TKunit
Expand source code
@property
def TKunit(self): return "K"

Getter for Tlink

Expand source code
@property
def Tlink(self):
    """Getter for Tlink"""
    return self._Tlink
var Tunit
Expand source code
@property
def Tunit(self): return self._Tunit
var chemical
Expand source code
@property
def chemical(self): return self.substance # alias/synonym of substance
var chemicalclass
Expand source code
@property
def chemicalclass(self): return self._chemicalclass
var chemicalsubstance
Expand source code
@property
def chemicalsubstance(self): return self._chemicalsubstance
var chemicalsubstance_history
Expand source code
@property
def chemicalsubstance_history(self):
    return self._chemicalsubstance_history if self._chemicalsubstance_history != [] else [self.chemicalsubstance]
var code
Expand source code
@property
def code(self): return self._code
var concentration
Expand source code
@property
def concentration(self): return sum(self.l*self.C0)/self.thickness

Returns True if C0link is defined

Expand source code
@property
def hasC0link(self):
    """Returns True if C0link is defined"""
    return self.C0link is not None

Returns True if Dlink is defined

Expand source code
@property
def hasDlink(self):
    """Returns True if Dlink is defined"""
    return self.Dlink is not None
var hasDmodel

Returns True if a Dmodel has been defined

Expand source code
@property
def hasDmodel(self):
    """Returns True if a Dmodel has been defined"""
    if hasattr(self, "_compute_Dmodel"):
        if self._compute_Dmodel() is not None:
            return True
        elif callable(self.Dmodel):
            return self.Dmodel() is not None
    return False

Returns True if Tlink is defined

Expand source code
@property
def hasTlink(self):
    """Returns True if Tlink is defined"""
    return self.Tlink is not None
var hashlayer

hash layer (layer-by-layer) method

Expand source code
@property
def hashlayer(self):
    """ hash layer (layer-by-layer) method """
    return [hash((self._name[n],
                  self._type[n],
                  self._material[n],
                  self._l[n],
                  self._D[n],
                  self.k[n],
                  self._C0[n],
                  self._rho[n]))
            for n in range(self._nlayer)
            ]

Returns True if klink is defined

Expand source code
@property
def hasklink(self):
    """Returns True if klink is defined"""
    return self.klink is not None
var haskmodel

Returns True if a kmodel has been defined

Expand source code
@property
def haskmodel(self):
    """Returns True if a kmodel has been defined"""
    if hasattr(self, "_compute_kmodel"):
        if self._compute_kmodel() is not None:
            return True
        elif callable(self.kmodel):
            return self.kmodel() is not None
    return False

Returns True if llink is defined

Expand source code
@property
def hasllink(self):
    """Returns True if llink is defined"""
    return self.llink is not None
var ispolymer
Expand source code
@property
def ispolymer(self): return self.chemicalclass == "polymer"
var ispolymer_history
Expand source code
@property
def ispolymer_history(self):
    return self._ispolymer_history if self._ispolymer_history != [] else [self.ispolymer]
var issolid
Expand source code
@property
def issolid(self): return self.physicalstate == "solid"
var k
Expand source code
@property
def k(self):
    ktmp = None
    if self.kmodel == "default": # default behavior
        ktmp = self._compute_kmodel()
    elif callable(self.kmodel): # user override
        ktmp = self.kmodel()
    if ktmp is not None:
        ktmp = np.full_like(self._k, ktmp,dtype=np.float64)
        if self.hasklink:
            return self.klink.getfull(ktmp) # substitution rules are applied as defined in klink
        else:
            return ktmp
    return self._k if not self.hasklink else self.klink.getfull(self._k)

Getter for klink

Expand source code
@property
def klink(self):
    """Getter for klink"""
    return self._klink
var kmodel
Expand source code
@property
def kmodel(self):
    return self._kmodel
var kunit
Expand source code
@property
def kunit(self): return self._kunit
var l
Expand source code
@property
def l(self): return self._l if not self.hasllink else self.llink.getfull(self._l)
var lag
Expand source code
@property
def lag(self): return self.l**2/(6*self.D)
var layerclass
Expand source code
@property
def layerclass(self): return type(self).__name__
var layerclass_history
Expand source code
@property
def layerclass_history(self):
    return self._layerclass_history if self._layerclass_history != [] else [self.layerclass]

Getter for llink

Expand source code
@property
def llink(self):
    """Getter for llink"""
    return self._llink
var lreferencelayer
Expand source code
@property
def lreferencelayer(self): return self.l[self.referencelayer]
var lunit
Expand source code
@property
def lunit(self): return self._lunit
var material
Expand source code
@property
def material(self): return self._material
var medium
Expand source code
@property
def medium(self): return self._medium
var migrant
Expand source code
@property
def migrant(self): return self.substance # alias/synonym of substance
var n
Expand source code
@property
def n(self): return self._nlayer
var name
Expand source code
@property
def name(self): return self._name
var nmesh
Expand source code
@property
def nmesh(self): return self._nmesh
var nmeshmin
Expand source code
@property
def nmeshmin(self): return self._nmeshmin
var permeability
Expand source code
@property
def permeability(self): return self.D/(self.l*self.k)
var physicalstate
Expand source code
@property
def physicalstate(self): return self._physicalstate
var polarityindex
Expand source code
@property
def polarityindex(self):
    # rescaled to match predictions - standard scale [0,10.2] - predicted scale [0,7.12]
    return self._polarityindex * migrant("water").polarityindex/10.2
var pressure
Expand source code
@property
def pressure(self): return self.k*self.C0
var rank
Expand source code
@property
def rank(self): return (self.n-np.argsort(np.array(self.resistance))).tolist()
var referencelayer
Expand source code
@property
def referencelayer(self): return np.argmax(self.resistance)
var relative_resistance
Expand source code
@property
def relative_resistance(self): return self.resistance/sum(self.resistance)
var relative_thickness
Expand source code
@property
def relative_thickness(self): return self.l/self.thickness
var resistance
Expand source code
@property
def resistance(self): return self.l*self.k/self.D
var rho
Expand source code
@property
def rho(self): return self._rho
var rhounit
Expand source code
@property
def rhounit(self): return self._rhounit
var solute
Expand source code
@property
def solute(self): return self.substance # alias/synonym of substance
var substance
Expand source code
@property
def substance(self): return self._substance
var thickness
Expand source code
@property
def thickness(self): return sum(self.l)
var type
Expand source code
@property
def type(self): return self._type
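
A small sketch tying the derived quantities above together, assuming the listed subclasses can be instantiated with their defaults and concatenated with `+`:

```python
# Sketch: derived transport quantities of a bilayer built by concatenation
from patankar.layer import LDPE, gPET

AB = LDPE() + gPET()          # two monolayers combined into a multilayer
print(AB.thickness)           # sum(l)
print(AB.resistance)          # l*k/D, layer by layer
print(AB.referencelayer)      # index of the layer with the largest resistance
print(AB.permeability)        # D/(l*k), layer by layer
print(AB.lag)                 # l**2/(6*D), layer by layer
```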

Methods

def C0latex(self, numdigits=4, units='a.u.', prefix='C0=', mathmode='$')

Returns initial concentration values (C0) formatted in LaTeX scientific notation.

Expand source code
def C0latex(self, numdigits=4, units="a.u.",prefix="C0=",mathmode="$"):
    """Returns Initial Concentratoin values (C0) formatted in LaTeX scientific notation."""
    return [format_scientific_latex(c, numdigits, units, prefix,mathmode) for c in self.C0]
def Dlatex(self, numdigits=4, units='\\mathrm{m^2 \\cdot s^{-1}}', prefix='D=', mathmode='$')

Returns diffusivity values (D) formatted in LaTeX scientific notation.

Expand source code
def Dlatex(self, numdigits=4, units=r"\mathrm{m^2 \cdot s^{-1}}",prefix="D=",mathmode="$"):
    """Returns diffusivity values (D) formatted in LaTeX scientific notation."""
    return [format_scientific_latex(D, numdigits, units, prefix,mathmode) for D in self.D]
def acknowledge(self, what=None, category=None)

Register inherited properties under a given category.

Parameters:

what : str or list of str or a set
    The properties or attributes that have been inherited.
category : str
    The category under which the properties are grouped.

Expand source code
def acknowledge(self, what=None, category=None):
    """
    Register inherited properties under a given category.

    Parameters:
    -----------
    what : str or list of str or a set
        The properties or attributes that have been inherited.
    category : str
        The category under which the properties are grouped.
    """
    if category is None or what is None:
        raise ValueError("Both 'what' and 'category' must be provided.")
    if isinstance(what, str):
        what = {what}  # Convert string to a set
    elif isinstance(what, list):
        what = set(what)  # Convert list to a set for uniqueness
    elif not isinstance(what,set):
        raise TypeError("'what' must be a string, a list, or a set of strings.")
    if category not in self._hasbeeninherited:
        self._hasbeeninherited[category] = set()
    self._hasbeeninherited[category].update(what)
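
A minimal sketch of the registration mechanism; the category and property names are arbitrary examples:

```python
# Sketch: register properties inherited from a contacting medium under a category
from patankar.layer import LDPE

A = LDPE()
A.acknowledge(what={"T", "substance"}, category="contact")
print(A._hasbeeninherited)     # expected: {'contact': {'T', 'substance'}}
```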
def checknumvalue(self, value, ExpectedUnits=None)

returns a validated value to set properties

Expand source code
def checknumvalue(self,value,ExpectedUnits=None):
    """ returns a validate value to set properties """
    if isinstance(value,tuple):
        value = check_units(value,ExpectedUnits=ExpectedUnits)
    if isinstance(value,int): value = float(value)
    if isinstance(value,float): value = np.array([value])
    if isinstance(value,list): value = np.array(value)
    if len(value)>self._nlayer:
        value = value[:self._nlayer]
        if self.verbosity>1 and self.verbose:
            print('dimension mismatch, the extra value(s) have been removed')
    elif len(value)<self._nlayer:
        value = np.concatenate((value,value[-1:]*np.ones(self._nlayer-len(value))))
        if self.verbosity>1 and self.verbose:
            print('dimension mismatch, the last value has been repeated')
    return value
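
The padding/truncation behaviour can be summarized with a small sketch (a trilayer is assumed, built by concatenation):

```python
# Sketch: how checknumvalue() reconciles a value with the number of layers
from patankar.layer import LDPE

trilayer = LDPE() + LDPE() + LDPE()                           # 3 layers
print(trilayer.checknumvalue([1e-14]))                        # last value repeated -> 3 entries
print(trilayer.checknumvalue([1e-14, 2e-14, 3e-14, 4e-14]))   # extra value truncated -> 3 entries
```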
def checktextvalue(self, value)

returns a validated value to set properties

Expand source code
def checktextvalue(self,value):
    """ returns a validate value to set properties """
    if not isinstance(value,list): value = [value]
    if len(value)>self._nlayer:
        value = value[:self._nlayer]
        if self.verbosity>1 and self.verbose:
            print('dimension mismatch, the extra entry(ies) have been removed')
    elif len(value)<self._nlayer:
        value = value + value[-1:]*(self._nlayer-len(value))
        if self.verbosity>1 and self.verbose:
            print('dimension mismatch, the last entry has been repeated')
    return value
def contact(self, medium, **kwargs)

alias to migration method

Expand source code
def contact(self,medium,**kwargs):
    """alias to migration method"""
    return self.migration(medium,**kwargs)
def copy(self, **kwargs)

Creates a deep copy of the current layer instance.

Returns:

  • layer: A new layer instance identical to the original.

Expand source code
def copy(self,**kwargs):
    """
    Creates a deep copy of the current layer instance.

    Returns:
    - layer: A new layer instance identical to the original.
    """
    return duplicate(self).update(**kwargs)
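
Since `copy()` routes its keyword arguments through `update()`, a clone can be modified in one call; the parameter name `D` is assumed to be listed in `_defaults`:

```python
# Sketch: deep-copy a layer and override a parameter in one step
from patankar.layer import LDPE

A = LDPE()
B = A.copy(D=2e-14)    # deep copy, then update(D=2e-14)
print(A is B)          # False: B is an independent copy
```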
def klatex(self, numdigits=4, units='a.u.', prefix='k=', mathmode='$')

Returns Henry-like values (k) formatted in LaTeX scientific notation.

Expand source code
def klatex(self, numdigits=4, units="a.u.",prefix="k=",mathmode="$"):
    """Returns Henry-like values (k) formatted in LaTeX scientific notation."""
    return [format_scientific_latex(k, numdigits, units, prefix,mathmode) for k in self.k]
def llatex(self, numdigits=4, units='m', prefix='l=', mathmode='$')

Returns thickness values (l) formatted in LaTeX scientific notation.

Expand source code
def llatex(self, numdigits=4, units="m",prefix="l=",mathmode="$"):
    """Returns thickness values (k) formatted in LaTeX scientific notation."""
    return [format_scientific_latex(l, numdigits, units, prefix,mathmode) for l in self.l]
def mesh(self, nmesh=None, nmeshmin=None)

mesh() generates the mesh from nmesh and nmeshmin; call mesh(nmesh=value, nmeshmin=value) to override them.

Expand source code
def mesh(self,nmesh=None,nmeshmin=None):
    """ nmesh() generates mesh based on nmesh and nmeshmin, nmesh(nmesh=value,nmeshmin=value) """
    if nmesh==None: nmesh = self.nmesh
    if nmeshmin==None: nmeshmin = self.nmeshmin
    if nmeshmin>nmesh: nmeshmin,nmesh = nmesh, nmeshmin
    # X = mesh distribution (number of nodes per layer)
    X = np.ones(self._nlayer)
    for i in range(1,self._nlayer):
       X[i] = X[i-1]*(self.permeability[i-1]*self.l[i])/(self.permeability[i]*self.l[i-1])
    X = np.maximum(nmeshmin,np.ceil(nmesh*X/sum(X)))
    X = np.round((X/sum(X))*nmesh).astype(int)
    # do the mesh
    x0 = 0
    mymesh = []
    for i in range(self._nlayer):
        mymesh.append(mesh(self.l[i]/self.l[self.referencelayer],X[i],x0=x0,index=i))
        x0 += self.l[i]
    return mymesh
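
A short sketch of how the mesh generator might be called; the node counts are arbitrary:

```python
# Sketch: build the finite-volume mesh of a multilayer
from patankar.layer import LDPE, gPET

AB = LDPE() + gPET()
nodes = AB.mesh(nmesh=200, nmeshmin=10)   # one mesh object per layer, weighted by permeability
print(len(nodes))                         # equals AB.n (number of layers)
```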
def migration(self, medium=None, **kwargs)

interface to simulation engine: senspatankar

Expand source code
def migration(self,medium=None,**kwargs):
    """interface to simulation engine: senspantankar"""
    from patankar.food import foodphysics
    from patankar.migration import senspatankar
    if medium is None:
        medium = self.medium
    if not isinstance(medium,foodphysics):
        raise TypeError(f"medium must be a foodphysics not a {type(medium).__name__}")
    sim = senspatankar(self,medium,**kwargs)
    medium.lastsimulation = sim # store the last simulation result in medium
    medium.lastinput = self # store the last input (self)
    sim.savestate(self,medium) # store the inputs in sim for chaining
    return sim
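
A hedged usage sketch; `ethanol` is assumed to be one of the media defined in `patankar.food`:

```python
# Sketch: run a migration simulation and see where the result is stored
from patankar.layer import LDPE
from patankar.food import ethanol        # assumed food simulant class

A = LDPE()
medium = ethanol()
sim = A.migration(medium)                # equivalent to: sim = A >> medium
print(medium.lastsimulation is sim)      # True: the result is also attached to the medium
```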
def simplify(self)

merge continuous layers of the same type

Expand source code
def simplify(self):
    """ merge continuous layers of the same type """
    nlayer = self._nlayer
    if nlayer>1:
       res = self[0]
       ires = 0
       ireshash = res.hashlayer[0]
       for i in range(1,nlayer):
           if self.hashlayer[i]==ireshash:
               res.l[ires] = res.l[ires]+self.l[i]
           else:
               res = res + self[i]
               ires = ires+1
               ireshash = self.hashlayer[i]
    else:
         res = self.copy()
    return res
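
A sketch of the merging behaviour, assuming two consecutive identical layers hash to the same value:

```python
# Sketch: consecutive identical layers are merged into one thicker layer
from patankar.layer import LDPE, gPET

stack = LDPE() + LDPE() + gPET()   # three layers, the first two identical
merged = stack.simplify()          # the two LDPE thicknesses are summed
print(stack.n, merged.n)           # expected: 3 2
```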
def split(self)

split layers

Expand source code
def split(self):
    """ split layers """
    out = ()
    if self._nlayer>0:
        for i in range(self._nlayer):
            out = out + (self[i],) # (,) special syntax for tuple singleton
    return out
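
And the inverse operation:

```python
# Sketch: decompose a multilayer into a tuple of monolayers
from patankar.layer import LDPE, gPET

parts = (LDPE() + gPET()).split()
print(len(parts), parts[0].n)      # expected: 2 1
```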
def struct(self)

returns the equivalent dictionary from an object

Expand source code
def struct(self):
    """ returns the equivalent dictionary from an object """
    return dict((key, getattr(self, key)) for key in dir(self) if key not in dir(self.__class__))
def update(self, **kwargs)

Update layer parameters following strict validation rules.

Rules:
1) key should be listed in self._defaults
2) for some keys, synonyms are acceptable as reported in self._synonyms
3) values cannot be None if they were not None in _defaults
4) values should be str if they were initially str, idem with bool
5) values which were numeric (int, float, np.ndarray) should remain numeric.
6) lists are acceptable as numeric arrays
7) all numerical (float, np.ndarray, list) except int must be converted into numpy arrays. Values which were int in _defaults must remain int and an error should be raised if a float value is proposed.
8) keys listed in _parametersWithUnits can be assigned with tuples (value, "unit"). They will be converted automatically with check_units(value).
9) for parameters with a default value None, any value is acceptable
10) A clear error message should be displayed for any bad value showing the current value of the parameter and its default value.

Expand source code
def update(self, **kwargs):
    """
    Update layer parameters following strict validation rules.

    Rules:
    1) key should be listed in self._defaults
    2) for some keys, synonyms are acceptable as reported in self._synonyms
    3) values cannot be None if they were not None in _defaults
    4) values should be str if they were initially str, idem with bool
    5) values which were numeric (int, float, np.ndarray) should remain numeric.
    6) lists are acceptable as numeric arrays
    7) all numerical (float, np.ndarray, list) except int must be converted into numpy arrays.
       Values which were int in _defaults must remain int and an error should be raised
       if a float value is proposed.
    8) keys listed in _parametersWithUnits can be assigned with tuples (value, "unit").
       They will be converted automatically with check_units(value).
    9) for parameters with a default value None, any value is acceptable
    10) A clear error message should be displayed for any bad value showing the
        current value of the parameter and its default value.
    """

    if not kwargs:  # shortcut
        return self # for chaining

    param_counts = {key: 0 for key in self._defaults}  # Track how many times each param is set

    def resolve_key(key):
        """Resolve key considering synonyms and check for duplicates."""
        for main_key, synonyms in self._synonyms.items():
            if key == main_key or key in synonyms:
                param_counts[main_key] += 1
                return main_key
        param_counts[key] += 1
        return key

    def validate_value(key, value):
        """Validate and process the value according to the rules."""
        default_value = self._defaults[key]

        # Rule 3: values cannot be None if they were not None in _defaults
        if value is None and default_value is not None:
            raise ValueError(f"Invalid value for '{key}': None is not allowed. "
                             f"Current: {getattr(self, key)}, Default: {default_value}")

        # Rule 9: If default is None, any value is acceptable
        if default_value is None:
            return value

        # Rule 4 & 5: Ensure type consistency (str, bool, or numeric types)
        if isinstance(default_value, str) and not isinstance(value, str):
            raise TypeError(f"Invalid type for '{key}': Expected str, got {type(value).__name__}. "
                            f"Current: {getattr(self, key)}, Default: {default_value}")
        if isinstance(default_value, bool) and not isinstance(value, bool):
            raise TypeError(f"Invalid type for '{key}': Expected bool, got {type(value).__name__}. "
                            f"Current: {getattr(self, key)}, Default: {default_value}")

        # Rule 6 & 7: Convert numeric types properly
        if isinstance(default_value, (int, float, np.ndarray)):
            if isinstance(value, list):
                value = np.array(value)

            if isinstance(default_value, int):
                if isinstance(value, float) or (isinstance(value, np.ndarray) and np.issubdtype(value.dtype, np.floating)):
                    raise TypeError(f"Invalid type for '{key}': Expected integer, got float. "
                                    f"Current: {getattr(self, key)}, Default: {default_value}")
                if isinstance(value, (int, np.integer)):
                    return int(value)  # Ensure it remains an int
                raise TypeError(f"Invalid type for '{key}': Expected integer, got {type(value).__name__}. "
                                f"Current: {getattr(self, key)}, Default: {default_value}")

            if isinstance(value, (int, float, list, np.ndarray)):
                return np.array(value, dtype=float)  # Convert everything to np.array for floats

            raise TypeError(f"Invalid type for '{key}': Expected numeric, got {type(value).__name__}. "
                            f"Current: {getattr(self, key)}, Default: {default_value}")

        # Rule 8: Convert units if applicable
        if key in self._parametersWithUnits and isinstance(value, tuple):
            value, unit = value
            converted_value, _ = check_units((value, unit), ExpectedUnits=self._parametersWithUnits[key])
            return converted_value

        return value

    # Apply updates while tracking parameter occurrences
    for key, value in kwargs.items():
        resolved_key = resolve_key(key)

        if resolved_key not in self._defaults:
            raise KeyError(f"Invalid key '{key}'. Allowed keys: {list(self._defaults.keys())}.")

        try:
            validated_value = validate_value(resolved_key, value)
            setattr(self, resolved_key, validated_value)
        except (TypeError, ValueError) as e:
            raise ValueError(f"Error updating '{key}': {e}")

    # Ensure that no parameter was set multiple times due to synonyms
    duplicate_keys = [k for k, v in param_counts.items() if v > 1]
    if duplicate_keys:
        raise ValueError(f"Duplicate assignment detected for parameters: {duplicate_keys}. "
                         "Use only one synonym per parameter.")

    return self # to enable chaining
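
A hedged usage sketch of the rules above; `D` and `C0` are assumed to be listed in `_defaults`:

```python
# Sketch: safe, chainable updates
from patankar.layer import LDPE

A = LDPE()
A.update(D=1e-14, C0=[100.0])        # numeric scalars/lists become numpy float arrays (rules 6-7)
A.update(D=2e-14).update(C0=50.0)    # update() returns self, so calls can be chained
```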

A sparse representation of properties (D, k, C0) used in layer instances.

This class allows storing and manipulating selected values of a property (`D`, `k`, or `C0`)
while keeping a sparse structure. It enables seamless interaction with `layer` objects
by overriding values dynamically and ensuring efficient memory usage.

The primary use case is to fit and control property values externally while keeping
the `layer` representation internally consistent.

Attributes
----------
property : str
    The name of the property linked (`"D"`, `"k"`, or `"C0"`).
indices : np.ndarray
    A NumPy array storing the indices of explicitly defined values.
values : np.ndarray
    A NumPy array storing the corresponding values at `indices`.
length : int
    The total length of the sparse vector, ensuring coverage of all indices.
replacement : str, optional
    Defines how missing values are handled:
    - `"repeat"`: Propagates the last known value beyond `length`.
    - `"periodic"`: Cycles through known values beyond `length`.
    - Default: No automatic replacement within `length`.

Methods
-------
set(index, value)
    Sets values at specific indices. If `None` or `np.nan` is provided, the index is removed.
get(index=None)
    Retrieves values at the given indices. Returns `NaN` for missing values.
getandreplace(indices, altvalues)
    Similar to `get()`, but replaces `NaN` values with corresponding values from `altvalues`.
getfull(altvalues)
    Returns the full vector using `getandreplace(None, altvalues)`.
lengthextension()
    Ensures `length` covers all stored indices (`max(indices) + 1`).
rename(new_property_name)
    Renames the `property` associated with this `layerLink`.
nzcount()
    Returns the number of explicitly stored (nonzero) elements.
__getitem__(index)
    Allows retrieval using `D_link[index]`, equivalent to `get(index)`.
__setitem__(index, value)
    Allows assignment using `D_link[index] = value`, equivalent to `set(index, value)`.
__add__(other)
    Concatenates two `layerLink` instances with the same property.
__mul__(n)
    Repeats the `layerLink` instance `n` times, shifting indices accordingly.

Examples
--------
Create a <code><a title="migration.layerLink" href="#migration.layerLink">layerLink</a></code> for <code>D</code> and manipulate its values:

```python
D_link = layerLink("D")
D_link.set([0, 2], [1e-14, 3e-14])
print(D_link.get())  # Expected: array([1e-14, nan, 3e-14])

D_link[1] = 2e-14
print(D_link.get())  # Expected: array([1e-14, 2e-14, 3e-14])
```

Concatenating two <code><a title="migration.layerLink" href="#migration.layerLink">layerLink</a></code> instances:

```python
A = layerLink("D")
A.set([0, 2], [1e-14, 3e-14])

B = layerLink("D")
B.set([1, 3], [2e-14, 4e-14])

C = A + B  # Concatenates while shifting indices
print(C.get())  # Expected: array([1e-14, 3e-14, nan, nan, 2e-14, 4e-14])
```

Handling missing values with <code>getandreplace()</code>:

```python
alt_values = np.array([5e-14, 6e-14, 7e-14, 8e-14])
print(D_link.getandreplace([0, 1, 2, 3], alt_values))
# Expected: array([1e-14, 2e-14, 3e-14, 8e-14])  # Fills NaNs from alt_values
```

Ensuring correct behavior for `*`:

```python
B = A * 3  # Repeats A three times
print(B.indices)  # Expected: [0, 2, 4, 6, 8, 10]
print(B.values)   # Expected: [1e-14, 3e-14, 1e-14, 3e-14, 1e-14, 3e-14]
print(B.length)   # Expected: 3 * A.length
```


Other Examples:
----------------

### **Creating a Link**
D_link = layerLink("D", indices=[1, 3], values=[5e-14, 7e-14], length=4)
print(D_link)  # <Link for D: 2 of 4 replacement values>

### **Retrieving Values**
print(D_link.get())       # Full vector with None in unspecified indices
print(D_link.get(1))      # Returns 5e-14
print(D_link.get([0,2]))  # Returns [None, None]

### **Setting Values**
D_link.set(2, 6e-14)
print(D_link.get())  # Now index 2 is replaced

### **Resetting with a Prototype**
prototype = [None, 5e-14, None, 7e-14, 8e-14]
D_link.reset(prototype)
print(D_link.get())  # Now follows the new structure

### **Getting and Setting Values with []**
D_link = layerLink("D", indices=[1, 3, 5], values=[5e-14, 7e-14, 6e-14], length=10)
print(D_link[3])      # ✅ Returns 7e-14
print(D_link[:5])     # ✅ Returns first 5 elements (with NaNs where undefined)
print(D_link[[1, 3]]) # ✅ Returns [5e-14, 7e-14]
D_link[2] = 9e-14     # ✅ Sets D[2] to 9e-14
D_link[0:4:2] = [1e-14, 2e-14]  # ✅ Sets D[0] = 1e-14, D[2] = 2e-14
print(len(D_link))    # ✅ Returns 10 (full vector length)

###**Practical Syntaxes**
D_link = layerLink("D")
D_link[2] = 3e-14  # ✅ single value
D_link[0] = 1e-14
print(D_link.get())
print(D_link[1])
print(repr(D_link))
D_link[:4] = 1e-16  # ✅ Fills indices 0,1,2,3 with 1e-16
print(D_link.get())  # ✅ Outputs: [1e-16, 1e-16, 1e-16, 1e-16, nan, 1e-14]
D_link[[1,2]] = None  # ✅ Fills indices 0,1,2,3 with 1e-16
print(D_link.get())  # ✅ Outputs: [1e-16, 1e-16, 1e-16, 1e-16, nan, 1e-14]
D_link[[0]] = 1e-10
print(D_link.get())

###**How it works inside layer: a short simulation**
# layerLink created by user
duser = layerLink()
duser.getfull([1e-15,2e-15,3e-15])
duser[0] = 1e-10
duser.getfull([1e-15,2e-15,3e-15])
duser[1]=1e-9
duser.getfull([1e-15,2e-15,3e-15])
# layerLink used internally
dalias=duser
dalias[1]=2e-11
duser.getfull([1e-15,2e-15,3e-15,4e-15])
dalias[1]=2.1e-11
duser.getfull([1e-15,2e-15,3e-15,4e-15])

###**Combining layerLinks instances**
A = layerLink("D")
A.set([0, 2], [1e-11, 3e-11])  # length=3
B = layerLink("D")
B.set([1, 3], [2e-14, 4e-12])  # length=4
C = A + B
print(C.indices)  # Expected: [0, 2, 4, 6]
print(C.values)   # Expected: [1.e-11 3.e-11 2.e-14 4.e-12]
print(C.length)   # Expected: 3 + 4 = 7


TEST CASES:
-----------

print("🔹 Test 1: Initialize empty layerLink")
D_link = layerLink("D")
print(D_link.get())  # Expected: array([]) or array([nan, nan, nan]) if length is pre-set
print(repr(D_link))  # Expected: No indices set

print("

🔹 Test 2: Assigning values at specific indices") D_link[2] = 3e-14 D_link[0] = 1e-14 print(D_link.get()) # Expected: array([1.e-14, nan, 3.e-14]) print(D_link[1]) # Expected: nan

print("

🔹 Test 3: Assign multiple values at once") D_link[[1, 4]] = [2e-14, 5e-14] print(D_link.get()) # Expected: array([1.e-14, 2.e-14, 3.e-14, nan, 5.e-14])

print("

🔹 Test 4: Remove a single index") D_link[1] = None print(D_link.get()) # Expected: array([1.e-14, nan, 3.e-14, nan, 5.e-14])

print("

🔹 Test 5: Remove multiple indices at once") D_link[[0, 2]] = None print(D_link.get()) # Expected: array([nan, nan, nan, nan, 5.e-14])

print("

🔹 Test 6: Removing indices using a slice") D_link[3:5] = None print(D_link.get()) # Expected: array([nan, nan, nan, nan, nan])

print("

🔹 Test 7: Assign new values after removals") D_link[1] = 7e-14 D_link[3] = 8e-14 print(D_link.get()) # Expected: array([nan, 7.e-14, nan, 8.e-14, nan])

print("

🔹 Test 8: Check periodic replacement") D_link = layerLink("D", replacement="periodic") D_link[2] = 3e-14 D_link[0] = 1e-14 print(D_link[5]) # Expected: 1e-14 (since 5 mod 2 = 0)

print("

🔹 Test 9: Check repeat replacement") D_link = layerLink("D", replacement="repeat") D_link[2] = 3e-14 D_link[0] = 1e-14 print(D_link.get()) # Expected: array([1.e-14, nan, 3.e-14]) print(D_link[3]) # Expected: 3e-14 (repeat last known value)

print("

🔹 Test 10: Resetting with a prototype") D_link.reset([None, 5e-14, None, 7e-14]) print(D_link.get()) # Expected: array([nan, 5.e-14, nan, 7.e-14])

print("

🔹 Test 11: Edge case - Assigning nan explicitly") D_link[1] = np.nan print(D_link.get()) # Expected: array([nan, nan, nan, 7.e-14])

print("

🔹 Test 12: Assigning a range with a scalar value (broadcasting)") D_link[0:3] = 9e-14 print(D_link.get()) # Expected: array([9.e-14, 9.e-14, 9.e-14, 7.e-14])

print("

🔹 Test 13: Assigning a slice with a list of values") D_link[1:4] = [6e-14, 5e-14, 4e-14] print(D_link.get()) # Expected: array([9.e-14, 6.e-14, 5.e-14, 4.e-14])

print("

🔹 Test 14: Length updates correctly after removals") D_link[[1, 2]] = None print(len(D_link)) # Expected: 4 (since max index is 3)

print("

🔹 Test 15: Setting index beyond length auto-extends") D_link[6] = 2e-14 print(len(D_link)) # Expected: 7 (since max index is 6) print(D_link.get()) # Expected: array([9.e-14, nan, nan, 4.e-14, nan, nan, 2.e-14])

constructs a link

Expand source code
class layerLink:
    """
    A sparse representation of properties (`D`, `k`, `C0`) used in `layer` instances.

    This class allows storing and manipulating selected values of a property (`D`, `k`, or `C0`)
    while keeping a sparse structure. It enables seamless interaction with `layer` objects
    by overriding values dynamically and ensuring efficient memory usage.

    The primary use case is to fit and control property values externally while keeping
    the `layer` representation internally consistent.

    Attributes
    ----------
    property : str
        The name of the property linked (`"D"`, `"k"`, or `"C0"`).
    indices : np.ndarray
        A NumPy array storing the indices of explicitly defined values.
    values : np.ndarray
        A NumPy array storing the corresponding values at `indices`.
    length : int
        The total length of the sparse vector, ensuring coverage of all indices.
    replacement : str, optional
        Defines how missing values are handled:
        - `"repeat"`: Propagates the last known value beyond `length`.
        - `"periodic"`: Cycles through known values beyond `length`.
        - Default: No automatic replacement within `length`.

    Methods
    -------
    set(index, value)
        Sets values at specific indices. If `None` or `np.nan` is provided, the index is removed.
    get(index=None)
        Retrieves values at the given indices. Returns `NaN` for missing values.
    getandreplace(indices, altvalues)
        Similar to `get()`, but replaces `NaN` values with corresponding values from `altvalues`.
    getfull(altvalues)
        Returns the full vector using `getandreplace(None, altvalues)`.
    lengthextension()
        Ensures `length` covers all stored indices (`max(indices) + 1`).
    rename(new_property_name)
        Renames the `property` associated with this `layerLink`.
    nzcount()
        Returns the number of explicitly stored (nonzero) elements.
    __getitem__(index)
        Allows retrieval using `D_link[index]`, equivalent to `get(index)`.
    __setitem__(index, value)
        Allows assignment using `D_link[index] = value`, equivalent to `set(index, value)`.
    __add__(other)
        Concatenates two `layerLink` instances with the same property.
    __mul__(n)
        Repeats the `layerLink` instance `n` times, shifting indices accordingly.

    Examples
    --------
    Create a `layerLink` for `D` and manipulate its values:

    ```python
    D_link = layerLink("D")
    D_link.set([0, 2], [1e-14, 3e-14])
    print(D_link.get())  # Expected: array([1e-14, nan, 3e-14])

    D_link[1] = 2e-14
    print(D_link.get())  # Expected: array([1e-14, 2e-14, 3e-14])
    ```

    Concatenating two `layerLink` instances:

    ```python
    A = layerLink("D")
    A.set([0, 2], [1e-14, 3e-14])

    B = layerLink("D")
    B.set([1, 3], [2e-14, 4e-14])

    C = A + B  # Concatenates while shifting indices
    print(C.get())  # Expected: array([1e-14, 3e-14, nan, nan, 2e-14, 4e-14])
    ```

    Handling missing values with `getandreplace()`:

    ```python
    alt_values = np.array([5e-14, 6e-14, 7e-14, 8e-14])
    print(D_link.getandreplace([0, 1, 2, 3], alt_values))
    # Expected: array([1e-14, 2e-14, 3e-14, 8e-14])  # Fills NaNs from alt_values
    ```

    Ensuring correct behavior for `*`:

    ```python
    B = A * 3  # Repeats A three times
    print(B.indices)  # Expected: [0, 2, 4, 6, 8, 10]
    print(B.values)   # Expected: [1e-14, 3e-14, 1e-14, 3e-14, 1e-14, 3e-14]
    print(B.length)   # Expected: 3 * A.length
    ```


    Other Examples:
    ----------------

    ### **Creating a Link**
    D_link = layerLink("D", indices=[1, 3], values=[5e-14, 7e-14], length=4)
    print(D_link)  # <Link for D: 2 of 4 replacement values>

    ### **Retrieving Values**
    print(D_link.get())       # Full vector with None in unspecified indices
    print(D_link.get(1))      # Returns 5e-14
    print(D_link.get([0,2]))  # Returns [None, None]

    ### **Setting Values**
    D_link.set(2, 6e-14)
    print(D_link.get())  # Now index 2 is replaced

    ### **Resetting with a Prototype**
    prototype = [None, 5e-14, None, 7e-14, 8e-14]
    D_link.reset(prototype)
    print(D_link.get())  # Now follows the new structure

    ### **Getting and Setting Values with []**
    D_link = layerLink("D", indices=[1, 3, 5], values=[5e-14, 7e-14, 6e-14], length=10)
    print(D_link[3])      # ✅ Returns 7e-14
    print(D_link[:5])     # ✅ Returns first 5 elements (with NaNs where undefined)
    print(D_link[[1, 3]]) # ✅ Returns [5e-14, 7e-14]
    D_link[2] = 9e-14     # ✅ Sets D[2] to 9e-14
    D_link[0:4:2] = [1e-14, 2e-14]  # ✅ Sets D[0] = 1e-14, D[2] = 2e-14
    print(len(D_link))    # ✅ Returns 10 (full vector length)

    ###**Practical Syntaxes**
    D_link = layerLink("D")
    D_link[2] = 3e-14  # ✅ single value
    D_link[0] = 1e-14
    print(D_link.get())
    print(D_link[1])
    print(repr(D_link))
    D_link[:4] = 1e-16  # ✅ Fills indices 0,1,2,3 with 1e-16
    print(D_link.get())  # ✅ Outputs: [1e-16, 1e-16, 1e-16, 1e-16, nan, 1e-14]
    D_link[[1,2]] = None  # ✅ Fills indices 0,1,2,3 with 1e-16
    print(D_link.get())  # ✅ Outputs: [1e-16, 1e-16, 1e-16, 1e-16, nan, 1e-14]
    D_link[[0]] = 1e-10
    print(D_link.get())

    ###**How it works inside layer: a short simulation**
    # layerLink created by user
    duser = layerLink()
    duser.getfull([1e-15,2e-15,3e-15])
    duser[0] = 1e-10
    duser.getfull([1e-15,2e-15,3e-15])
    duser[1]=1e-9
    duser.getfull([1e-15,2e-15,3e-15])
    # layerLink used internally
    dalias=duser
    dalias[1]=2e-11
    duser.getfull([1e-15,2e-15,3e-15,4e-15])
    dalias[1]=2.1e-11
    duser.getfull([1e-15,2e-15,3e-15,4e-15])

    ###**Combining layerLinks instances**
    A = layerLink("D")
    A.set([0, 2], [1e-11, 3e-11])  # length=3
    B = layerLink("D")
    B.set([1, 3], [2e-14, 4e-12])  # length=4
    C = A + B
    print(C.indices)  # Expected: [0, 2, 4, 6]
    print(C.values)   # Expected: [1.e-11 3.e-11 2.e-14 4.e-12]
    print(C.length)   # Expected: 3 + 4 = 7


    TEST CASES:
    -----------

    print("🔹 Test 1: Initialize empty layerLink")
    D_link = layerLink("D")
    print(D_link.get())  # Expected: array([]) or array([nan, nan, nan]) if length is pre-set
    print(repr(D_link))  # Expected: No indices set

    print("\n🔹 Test 2: Assigning values at specific indices")
    D_link[2] = 3e-14
    D_link[0] = 1e-14
    print(D_link.get())  # Expected: array([1.e-14, nan, 3.e-14])
    print(D_link[1])     # Expected: nan

    print("\n🔹 Test 3: Assign multiple values at once")
    D_link[[1, 4]] = [2e-14, 5e-14]
    print(D_link.get())  # Expected: array([1.e-14, 2.e-14, 3.e-14, nan, 5.e-14])

    print("\n🔹 Test 4: Remove a single index")
    D_link[1] = None
    print(D_link.get())  # Expected: array([1.e-14, nan, 3.e-14, nan, 5.e-14])

    print("\n🔹 Test 5: Remove multiple indices at once")
    D_link[[0, 2]] = None
    print(D_link.get())  # Expected: array([nan, nan, nan, nan, 5.e-14])

    print("\n🔹 Test 6: Removing indices using a slice")
    D_link[3:5] = None
    print(D_link.get())  # Expected: array([nan, nan, nan, nan, nan])

    print("\n🔹 Test 7: Assign new values after removals")
    D_link[1] = 7e-14
    D_link[3] = 8e-14
    print(D_link.get())  # Expected: array([nan, 7.e-14, nan, 8.e-14, nan])

    print("\n🔹 Test 8: Check periodic replacement")
    D_link = layerLink("D", replacement="periodic")
    D_link[2] = 3e-14
    D_link[0] = 1e-14
    print(D_link[5])  # Expected: 1e-14 (since 5 mod 2 = 0)

    print("\n🔹 Test 9: Check repeat replacement")
    D_link = layerLink("D", replacement="repeat")
    D_link[2] = 3e-14
    D_link[0] = 1e-14
    print(D_link.get())  # Expected: array([1.e-14, nan, 3.e-14])
    print(D_link[3])     # Expected: 3e-14 (repeat last known value)

    print("\n🔹 Test 10: Resetting with a prototype")
    D_link.reset([None, 5e-14, None, 7e-14])
    print(D_link.get())  # Expected: array([nan, 5.e-14, nan, 7.e-14])

    print("\n🔹 Test 11: Edge case - Assigning nan explicitly")
    D_link[1] = np.nan
    print(D_link.get())  # Expected: array([nan, nan, nan, 7.e-14])

    print("\n🔹 Test 12: Assigning a range with a scalar value (broadcasting)")
    D_link[0:3] = 9e-14
    print(D_link.get())  # Expected: array([9.e-14, 9.e-14, 9.e-14, 7.e-14])

    print("\n🔹 Test 13: Assigning a slice with a list of values")
    D_link[1:4] = [6e-14, 5e-14, 4e-14]
    print(D_link.get())  # Expected: array([9.e-14, 6.e-14, 5.e-14, 4.e-14])

    print("\n🔹 Test 14: Length updates correctly after removals")
    D_link[[1, 2]] = None
    print(len(D_link))   # Expected: 4 (since max index is 3)

    print("\n🔹 Test 15: Setting index beyond length auto-extends")
    D_link[6] = 2e-14
    print(len(D_link))   # Expected: 7 (since max index is 6)
    print(D_link.get())  # Expected: array([9.e-14, nan, nan, 4.e-14, nan, nan, 2.e-14])

    """

    def __init__(self, property="D", indices=None, values=None, length=None,
                 replacement="repeat", dtype=np.float64, maxlength=None):
        """constructs a link"""
        self.property = property  # "D", "k", or "C0"
        self.replacement = replacement
        self.dtype = dtype
        self._maxlength = maxlength
        if isinstance(indices,(int,float)): indices = [indices]
        if isinstance(values,(int,float)): values = [values]

        if indices is None or values is None:
            self.indices = np.array([], dtype=int)
            self.values = np.array([], dtype=dtype)
        else:
            self.indices = np.array(indices, dtype=int)
            self.values = np.array(values, dtype=dtype)

        self.length = length if length is not None else (self.indices.max() + 1 if self.indices.size > 0 else 0)
        self._validate()

    def _validate(self):
        """Ensures consistency between indices and values."""
        if len(self.indices) != len(self.values):
            raise ValueError("indices and values must have the same length.")
        if self.indices.size > 0 and self.length < self.indices.max() + 1:
            raise ValueError("length must be at least max(indices) + 1.")

    def reset(self, prototypevalues):
        """
        Resets the link instance based on the prototype values.

        - Stores only non-None values.
        - Updates `indices`, `values`, and `length` accordingly.
        """
        self.indices = np.array([i for i, v in enumerate(prototypevalues) if v is not None], dtype=int)
        self.values = np.array([v for v in prototypevalues if v is not None], dtype=self.dtype)
        self.length = len(prototypevalues)  # Update the total length

    def get(self, index=None):
        """
        Retrieves values based on index or returns the full vector.

        Rules:
        - If `index=None`, returns the full vector with overridden values (no replacement applied).
        - If `index` is a scalar, returns the corresponding value, applying replacement rules if needed.
        - If `index` is an array, returns an array of the requested indices, applying replacement rules.

        Returns:
        - NumPy array with requested values.
        """
        if index is None:
            # Return the full vector WITHOUT applying any replacement
            full_vector = np.full(self.length, np.nan, dtype=self.dtype)
            full_vector[self.indices] = self.values  # Set known values
            return full_vector

        if np.isscalar(index):
            return self._get_single(index)

        # Ensure index is an array
        index = np.array(index, dtype=int)
        return np.array([self._get_single(i) for i in index], dtype=self.dtype)

    def _get_single(self, i):
        """Retrieves the value for a single index, applying rules if necessary."""
        if i in self.indices:
            return self.values[np.where(self.indices == i)[0][0]]

        if i >= self.length:  # Apply replacement *only* for indices beyond length
            if self.replacement == "periodic":
                return self.values[i % len(self.values)]
            elif self.replacement == "repeat":
                return self._get_single(self.length - 1)  # Repeat last known value

        return np.nan  # Default case for undefined in-bounds indices


    def set(self, index, value):
        """
        Sets values at specific indices.

        - If `index=None`, resets the link with `value`.
        - If `index` is a scalar, updates or inserts the value.
        - If `index` is an array, updates corresponding values.
        - If `value` is `None` or `np.nan`, removes the corresponding index.
        """
        if index is None:
            self.reset(value)
            return

        # Handle scalars properly (normalize BEFORE converting to arrays,
        # otherwise a 0-d array would no longer be detected as a scalar)
        if np.isscalar(index):
            index = [index]
            value = [value] if (value is None or np.isscalar(value)) else value

        index = np.array(index, dtype=int)
        value = np.array(value, dtype=self.dtype)

        # check against _maxlength if defined
        if self._maxlength is not None:
            if np.any(index >= self._maxlength):
                raise IndexError(f"index cannot exceed the number of layers - 1 ({self._maxlength-1})")

        # Detect None or NaN values and remove those indices
        mask = np.isnan(value) if value.dtype.kind == 'f' else np.array([v is None for v in value])
        if np.any(mask):
            self._remove_indices(index[mask])  # Remove these indices
            index, value = index[~mask], value[~mask]  # Keep only valid values

        if index.size > 0:  # If there are remaining valid values, store them
            for i, v in zip(index, value):
                if i in self.indices:
                    self.values[np.where(self.indices == i)[0][0]] = v
                else:
                    self.indices = np.append(self.indices, i)
                    self.values = np.append(self.values, v)

        # Update length to ensure it remains valid
        if self.indices.size > 0:
            self.length = max(self.indices) + 1  # Adjust length based on max index
        else:
            self.length = 0  # Reset to 0 if empty

        self._validate()

    def _remove_indices(self, indices):
        """
        Removes indices from `self.indices` and `self.values` and updates length.
        """
        mask = np.isin(self.indices, indices, invert=True)
        self.indices = self.indices[mask]
        self.values = self.values[mask]

        # Update length after removal
        if self.indices.size > 0:
            self.length = max(self.indices) + 1  # Adjust length based on remaining max index
        else:
            self.length = 0  # Reset to 0 if no indices remain

    def reshape(self, new_length):
        """
        Reshapes the link instance to a new length.

        - If indices exceed new_length-1, they are removed with a warning.
        - If replacement operates beyond new_length-1, a warning is issued.
        """
        if new_length < self.length:
            invalid_indices = self.indices[self.indices >= new_length]
            if invalid_indices.size > 0:
                print(f"⚠️ Warning: Indices {invalid_indices.tolist()} are outside new length {new_length}. They will be removed.")
                mask = self.indices < new_length
                self.indices = self.indices[mask]
                self.values = self.values[mask]

        # Check if replacement would be applied beyond the new length
        if self.replacement == "repeat" and self.indices.size > 0 and self.length > new_length:
            print(f"⚠️ Warning: Repeat rule was defined for indices beyond {new_length-1}, but will not be used.")

        if self.replacement == "periodic" and self.indices.size > 0 and self.length > new_length:
            print(f"⚠️ Warning: Periodic rule was defined for indices beyond {new_length-1}, but will not be used.")

        self.length = new_length

    def __repr__(self):
        """Returns a detailed string representation."""
        txt = (f"Link(property='{self.property}', indices={self.indices.tolist()}, "
                f"values={self.values.tolist()}, length={self.length}, replacement='{self.replacement}')")
        print(txt)
        return str(self)

    def __str__(self):
        """Returns a compact summary string."""
        return f"<{self.property}:{self.__class__.__name__}: {len(self.indices)}/{self.length}  values>"

    # Override `len()`
    def __len__(self):
        """Returns the length of the vector managed by the link object."""
        return self.length

    # Override `getitem` (support for indexing and slicing)
    def __getitem__(self, index):
        """
        Allows `D_link[index]` or `D_link[slice]` to retrieve values.

        - If `index` is an integer, returns a single value.
        - If `index` is a slice or list/array, returns a NumPy array of values.
        """
        if isinstance(index, slice):
            return self.get(np.arange(index.start or 0, index.stop or self.length, index.step or 1))
        return self.get(index)

    # Override `setitem` (support for indexing and slicing)
    def __setitem__(self, index, value):
        """
        Allows `D_link[index] = value` or `D_link[slice] = list/scalar`.

        - If `index` is an integer, updates or inserts a single value.
        - If `index` is a slice or list/array, updates multiple values.
        - If `value` is `None` or `np.nan`, removes the corresponding index.
        """
        if isinstance(index, slice):
            indices = np.arange(index.start or 0, index.stop or self.length, index.step or 1)

        elif isinstance(index, (list, np.ndarray)):  # Handle non-contiguous indices
            indices = np.array(index, dtype=int)

        elif np.isscalar(index):  # Single index assignment
            indices = np.array([index], dtype=int)

        else:
            raise TypeError(f"Unsupported index type: {type(index)}")

        if value is None or (isinstance(value, float) and np.isnan(value)):  # Remove these indices
            self._remove_indices(indices)
        else:
            values = np.full_like(indices, value, dtype=self.dtype) if np.isscalar(value) else np.array(value, dtype=self.dtype)
            if len(indices) != len(values):
                raise ValueError(f"Cannot assign {len(values)} values to {len(indices)} indices.")
            self.set(indices, values)

    def getandreplace(self, indices=None, altvalues=None):
        """
        Retrieves values for the given indices, replacing NaN values with corresponding values from altvalues.

        - If `indices` is None or empty, it defaults to `[0, 1, ..., self.length - 1]`
        - altvalues should be a NumPy array with the same dtype as self.values.
        - altvalues **can be longer than** self.length, but **cannot be shorter than the highest requested index**.
        - If an index is undefined (`NaN` in get()), it is replaced with altvalues[index].

        Parameters:
        ----------
        indices : list or np.ndarray (default: None)
            The indices to retrieve values for. If None, defaults to full range `[0, ..., self.length - 1]`.
        altvalues : list or np.ndarray
            Alternative values to use where `get()` returns `NaN`.

        Returns:
        -------
        np.ndarray
            A NumPy array of values, with NaNs replaced by altvalues.
        """
        if indices is None or len(indices) == 0:
            indices = np.arange(self.length)  # Default to full range

        indices = np.array(indices, dtype=int)
        altvalues = np.array(altvalues, dtype=self.dtype)

        max_requested_index = indices.max() if indices.size > 0 else 0
        if max_requested_index >= altvalues.shape[0]:  # Ensure altvalues covers all requested indices
            raise ValueError(
                f"altvalues is too short! It has length {altvalues.shape[0]}, but requested index {max_requested_index}."
            )
        # Get original values
        original_values = self.get(indices)
        # Replace NaN values with corresponding values from altvalues
        mask_nan = np.isnan(original_values)
        original_values[mask_nan] = altvalues[indices[mask_nan]]
        return original_values


    def getfull(self, altvalues):
        """
        Retrieves the full vector using `getandreplace(None, altvalues)`.

        - If `length == 0`, returns `altvalues` as a NumPy array of the correct dtype.
        - Extends `self.length` to match `altvalues` if it's shorter.
        - Supports multidimensional `altvalues` by flattening it.

        Parameters:
        ----------
        altvalues : list or np.ndarray
            Alternative values to use where `get()` returns `NaN`.

        Returns:
        -------
        np.ndarray
            Full vector with NaNs replaced by altvalues.
        """
        # Convert altvalues to a NumPy array and flatten if needed
        altvalues = np.array(altvalues, dtype=self.dtype).flatten()

        # If self has no length, return altvalues directly
        if self.length == 0:
            return altvalues

        # Extend self.length to match altvalues if needed
        if self.length < altvalues.shape[0]:
            self.length = altvalues.shape[0]

        return self.getandreplace(None, altvalues)

    @property
    def nzlength(self):
        """
        Returns the number of stored nonzero elements (i.e., indices with values).
        """
        return len(self.indices)

    def lengthextension(self):
        """
        Ensures that the length of the layerLink instance is at least `max(indices) + 1`.

        - If there are no indices, the length remains unchanged.
        - If `length` is already sufficient, nothing happens.
        - Otherwise, it extends `length` to `max(indices) + 1`.
        """
        if self.indices.size > 0:  # Only extend if there are indices
            self.length = max(self.length, max(self.indices) + 1)

    def rename(self, new_property_name):
        """
        Renames the property associated with this link.

        Parameters:
        ----------
        new_property_name : str
            The new property name.

        Raises:
        -------
        TypeError:
            If `new_property_name` is not a string.
        """
        if not isinstance(new_property_name, str):
            raise TypeError(f"Property name must be a string, got {type(new_property_name).__name__}.")
        self.property = new_property_name


    def __add__(self, other):
        """
        Concatenates two layerLink instances.

        - Only allowed if both instances have the same property.
        - Calls `lengthextension()` on both instances before summing lengths.
        - Shifts `other`'s indices by `self.length` to maintain sparsity.
        - Concatenates values and indices.

        Returns:
        -------
        layerLink
            A new concatenated layerLink instance.
        """
        if not isinstance(other, layerLink):
            raise TypeError(f"Cannot concatenate {type(self).__name__} with {type(other).__name__}")

        if self.property != other.property:
            raise ValueError(f"Cannot concatenate: properties do not match ('{self.property}' vs. '{other.property}')")

        # Ensure lengths are properly extended before computing new length
        self.lengthextension()
        other.lengthextension()

        # Create a new instance for the result
        result = layerLink(self.property)

        # Copy self's values
        result.indices = np.array(self.indices, dtype=int)
        result.values = np.array(self.values, dtype=self.dtype)

        # Adjust other’s indices and add them
        shifted_other_indices = np.array(other.indices) + self.length
        result.indices = np.concatenate([result.indices, shifted_other_indices])
        result.values = np.concatenate([result.values, np.array(other.values, dtype=self.dtype)])

        # ✅ Correct length calculation: Sum of the two lengths (assuming lengths are extended)
        result.length = self.length + other.length

        return result


    def __mul__(self, n):
        """
        Repeats the layerLink instance `n` times.

        - Uses `+` to concatenate multiple copies.
        - `+` already shifts the indices of its right-hand operand by the
          accumulated length, so each repetition lands after the previous one.

        Returns:
        -------
        layerLink
            A new layerLink instance with repeated data.
        """
        if not isinstance(n, int) or n <= 0:
            raise ValueError("Multiplication factor must be a positive integer")

        result = layerLink(self.property)
        for _ in range(n):
            # plain copy of self; `+` takes care of shifting its indices
            copy_instance = layerLink(self.property)
            copy_instance.indices = np.array(self.indices, dtype=int)
            copy_instance.values = np.array(self.values, dtype=self.dtype)
            copy_instance.length = self.length
            result += copy_instance  # use `+` to merge each repetition

        return result
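
A minimal sketch (not part of the original docstring) of the repetition operator; because `+` shifts the indices of its right-hand operand by the accumulated length, `A * 2` is expected to behave like `A + A`:

    A = layerLink("D")
    A.set([0, 2], [1e-11, 3e-11])   # stored indices {0, 2}, length 3
    AA = A * 2                      # equivalent to A + A
    print(AA.indices)               # expected: [0, 2, 3, 5]
    print(AA.length)                # expected: 6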

Instance variables

var nzlength

Returns the number of stored nonzero elements (i.e., indices with values).

Expand source code
@property
def nzlength(self):
    """
    Returns the number of stored nonzero elements (i.e., indices with values).
    """
    return len(self.indices)

Methods

def get(self, index=None)

Retrieves values based on index or returns the full vector.

Rules:

  • If index=None, returns the full vector with overridden values (no replacement applied).
  • If index is a scalar, returns the corresponding value, applying replacement rules if needed.
  • If index is an array, returns an array of the requested indices, applying replacement rules.

Returns:

  • NumPy array with requested values.

Expand source code
def get(self, index=None):
    """
    Retrieves values based on index or returns the full vector.

    Rules:
    - If `index=None`, returns the full vector with overridden values (no replacement applied).
    - If `index` is a scalar, returns the corresponding value, applying replacement rules if needed.
    - If `index` is an array, returns an array of the requested indices, applying replacement rules.

    Returns:
    - NumPy array with requested values.
    """
    if index is None:
        # Return the full vector WITHOUT applying any replacement
        full_vector = np.full(self.length, np.nan, dtype=self.dtype)
        full_vector[self.indices] = self.values  # Set known values
        return full_vector

    if np.isscalar(index):
        return self._get_single(index)

    # Ensure index is an array
    index = np.array(index, dtype=int)
    return np.array([self._get_single(i) for i in index], dtype=self.dtype)
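
A minimal usage sketch (not part of the original documentation), assuming the module context where layerLink is available:

    D_link = layerLink("D")
    D_link[[0, 2]] = [1e-14, 3e-14]   # stored indices {0, 2}, length 3
    print(D_link.get())               # full vector: [1.e-14, nan, 3.e-14]
    print(D_link.get(1))              # scalar access: nan (undefined but within length)
    print(D_link.get([0, 2]))         # array access: [1.e-14, 3.e-14]
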
def getandreplace(self, indices=None, altvalues=None)

Retrieves values for the given indices, replacing NaN values with corresponding values from altvalues.

  • If indices is None or empty, it defaults to [0, 1, ..., self.length - 1]
  • altvalues should be a NumPy array with the same dtype as self.values.
  • altvalues can be longer than self.length, but cannot be shorter than the highest requested index.
  • If an index is undefined (NaN in get()), it is replaced with altvalues[index].

Parameters:

indices : list or np.ndarray (default: None)
    The indices to retrieve values for. If None, defaults to full range [0, ..., self.length - 1].
altvalues : list or np.ndarray
    Alternative values to use where get() returns NaN.

Returns:

np.ndarray
    A NumPy array of values, with NaNs replaced by altvalues.

Expand source code
def getandreplace(self, indices=None, altvalues=None):
    """
    Retrieves values for the given indices, replacing NaN values with corresponding values from altvalues.

    - If `indices` is None or empty, it defaults to `[0, 1, ..., self.length - 1]`
    - altvalues should be a NumPy array with the same dtype as self.values.
    - altvalues **can be longer than** self.length, but **cannot be shorter than the highest requested index**.
    - If an index is undefined (`NaN` in get()), it is replaced with altvalues[index].

    Parameters:
    ----------
    indices : list or np.ndarray (default: None)
        The indices to retrieve values for. If None, defaults to full range `[0, ..., self.length - 1]`.
    altvalues : list or np.ndarray
        Alternative values to use where `get()` returns `NaN`.

    Returns:
    -------
    np.ndarray
        A NumPy array of values, with NaNs replaced by altvalues.
    """
    if indices is None or len(indices) == 0:
        indices = np.arange(self.length)  # Default to full range

    indices = np.array(indices, dtype=int)
    altvalues = np.array(altvalues, dtype=self.dtype)

    max_requested_index = indices.max() if indices.size > 0 else 0
    if max_requested_index >= altvalues.shape[0]:  # Ensure altvalues covers all requested indices
        raise ValueError(
            f"altvalues is too short! It has length {altvalues.shape[0]}, but requested index {max_requested_index}."
        )
    # Get original values
    original_values = self.get(indices)
    # Replace NaN values with corresponding values from altvalues
    mask_nan = np.isnan(original_values)
    original_values[mask_nan] = altvalues[indices[mask_nan]]
    return original_values
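
A minimal sketch (not part of the original documentation) showing how undefined entries are filled from altvalues:

    D_link = layerLink("D")
    D_link[[0, 2]] = [1e-14, 3e-14]               # defined at indices 0 and 2
    print(D_link.getandreplace([0, 1, 2], [5e-15, 6e-15, 7e-15]))
    # expected: [1.e-14, 6.e-15, 3.e-14]  (the NaN at index 1 is taken from altvalues[1])
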
def getfull(self, altvalues)

Retrieves the full vector using getandreplace(None, altvalues).

  • If length == 0, returns altvalues as a NumPy array of the correct dtype.
  • Extends self.length to match altvalues if it's shorter.
  • Supports multidimensional altvalues by flattening it.

Parameters:

altvalues : list or np.ndarray
    Alternative values to use where get() returns NaN.

Returns:

np.ndarray
    Full vector with NaNs replaced by altvalues.

Expand source code
def getfull(self, altvalues):
    """
    Retrieves the full vector using `getandreplace(None, altvalues)`.

    - If `length == 0`, returns `altvalues` as a NumPy array of the correct dtype.
    - Extends `self.length` to match `altvalues` if it's shorter.
    - Supports multidimensional `altvalues` by flattening it.

    Parameters:
    ----------
    altvalues : list or np.ndarray
        Alternative values to use where `get()` returns `NaN`.

    Returns:
    -------
    np.ndarray
        Full vector with NaNs replaced by altvalues.
    """
    # Convert altvalues to a NumPy array and flatten if needed
    altvalues = np.array(altvalues, dtype=self.dtype).flatten()

    # If self has no length, return altvalues directly
    if self.length == 0:
        return altvalues

    # Extend self.length to match altvalues if needed
    if self.length < altvalues.shape[0]:
        self.length = altvalues.shape[0]

    return self.getandreplace(None, altvalues)
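
A minimal sketch (not part of the original documentation) of the typical pattern where a link overrides a default vector of layer properties:

    D_link = layerLink("D")
    D_link[1] = 1e-10                              # override only layer index 1
    print(D_link.getfull([1e-15, 2e-15, 3e-15, 4e-15]))
    # expected: [1.e-15, 1.e-10, 3.e-15, 4.e-15]  (length extended to 4)
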
def lengthextension(self)

Ensures that the length of the layerLink instance is at least max(indices) + 1.

  • If there are no indices, the length remains unchanged.
  • If length is already sufficient, nothing happens.
  • Otherwise, it extends length to max(indices) + 1.
Expand source code
def lengthextension(self):
    """
    Ensures that the length of the layerLink instance is at least `max(indices) + 1`.

    - If there are no indices, the length remains unchanged.
    - If `length` is already sufficient, nothing happens.
    - Otherwise, it extends `length` to `max(indices) + 1`.
    """
    if self.indices.size > 0:  # Only extend if there are indices
        self.length = max(self.length, max(self.indices) + 1)
def rename(self, new_property_name)

Renames the property associated with this link.

Parameters:

new_property_name : str
    The new property name.

Raises:

TypeError

If new_property_name is not a string.

Expand source code
def rename(self, new_property_name):
    """
    Renames the property associated with this link.

    Parameters:
    ----------
    new_property_name : str
        The new property name.

    Raises:
    -------
    TypeError:
        If `new_property_name` is not a string.
    """
    if not isinstance(new_property_name, str):
        raise TypeError(f"Property name must be a string, got {type(new_property_name).__name__}.")
    self.property = new_property_name
def reset(self, prototypevalues)

Resets the link instance based on the prototype values.

  • Stores only non-None values.
  • Updates indices, values, and length accordingly.
Expand source code
def reset(self, prototypevalues):
    """
    Resets the link instance based on the prototype values.

    - Stores only non-None values.
    - Updates `indices`, `values`, and `length` accordingly.
    """
    self.indices = np.array([i for i, v in enumerate(prototypevalues) if v is not None], dtype=int)
    self.values = np.array([v for v in prototypevalues if v is not None], dtype=self.dtype)
    self.length = len(prototypevalues)  # Update the total length
def reshape(self, new_length)

Reshapes the link instance to a new length.

  • If indices exceed new_length-1, they are removed with a warning.
  • If replacement operates beyond new_length-1, a warning is issued.
Expand source code
def reshape(self, new_length):
    """
    Reshapes the link instance to a new length.

    - If indices exceed new_length-1, they are removed with a warning.
    - If replacement operates beyond new_length-1, a warning is issued.
    """
    if new_length < self.length:
        invalid_indices = self.indices[self.indices >= new_length]
        if invalid_indices.size > 0:
            print(f"⚠️ Warning: Indices {invalid_indices.tolist()} are outside new length {new_length}. They will be removed.")
            mask = self.indices < new_length
            self.indices = self.indices[mask]
            self.values = self.values[mask]

    # Check if replacement would be applied beyond the new length
    if self.replacement == "repeat" and self.indices.size > 0 and self.length > new_length:
        print(f"⚠️ Warning: Repeat rule was defined for indices beyond {new_length-1}, but will not be used.")

    if self.replacement == "periodic" and self.indices.size > 0 and self.length > new_length:
        print(f"⚠️ Warning: Periodic rule was defined for indices beyond {new_length-1}, but will not be used.")

    self.length = new_length
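
A minimal sketch (not part of the original documentation); indices falling outside the new length are dropped and warnings are printed:

    D_link = layerLink("D")
    D_link[[0, 4]] = [1e-14, 5e-14]   # length becomes 5
    D_link.reshape(3)                 # index 4 is outside the new length and is removed
    print(D_link.get())               # expected: [1.e-14, nan, nan]
    print(len(D_link))                # expected: 3
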
def set(self, index, value)

Sets values at specific indices.

  • If index=None, resets the link with value.
  • If index is a scalar, updates or inserts the value.
  • If index is an array, updates corresponding values.
  • If value is None or np.nan, removes the corresponding index.
Expand source code
def set(self, index, value):
    """
    Sets values at specific indices.

    - If `index=None`, resets the link with `value`.
    - If `index` is a scalar, updates or inserts the value.
    - If `index` is an array, updates corresponding values.
    - If `value` is `None` or `np.nan`, removes the corresponding index.
    """
    if index is None:
        self.reset(value)
        return

    # Handle scalars properly (normalize BEFORE converting to arrays,
    # otherwise a 0-d array would no longer be detected as a scalar)
    if np.isscalar(index):
        index = [index]
        value = [value] if (value is None or np.isscalar(value)) else value

    index = np.array(index, dtype=int)
    value = np.array(value, dtype=self.dtype)

    # check against _maxlength if defined
    if self._maxlength is not None:
        if np.any(index >= self._maxlength):
            raise IndexError(f"index cannot exceed the number of layers - 1 ({self._maxlength-1})")

    # Detect None or NaN values and remove those indices
    mask = np.isnan(value) if value.dtype.kind == 'f' else np.array([v is None for v in value])
    if np.any(mask):
        self._remove_indices(index[mask])  # Remove these indices
        index, value = index[~mask], value[~mask]  # Keep only valid values

    if index.size > 0:  # If there are remaining valid values, store them
        for i, v in zip(index, value):
            if i in self.indices:
                self.values[np.where(self.indices == i)[0][0]] = v
            else:
                self.indices = np.append(self.indices, i)
                self.values = np.append(self.values, v)

    # Update length to ensure it remains valid
    if self.indices.size > 0:
        self.length = max(self.indices) + 1  # Adjust length based on max index
    else:
        self.length = 0  # Reset to 0 if empty

    self._validate()
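
A minimal sketch (not part of the original documentation); a None (or NaN) value removes the corresponding index and the length is recomputed:

    D_link = layerLink("D")
    D_link.set([0, 2], [1e-14, 3e-14])   # define indices 0 and 2, length 3
    D_link.set([2], [None])              # None removes index 2
    print(D_link.get())                  # expected: [1.e-14]  (length shrank to 1)
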
class restartfile

A container class for storing simulation restart data.

This class facilitates storing and restoring simulation parameters and results, allowing simulations to be resumed or analyzed after computation.

Methods:

copy(what)
    Creates a deep copy of various data types to ensure safety in storage.

Example:

restart = restartfile()
copy_data = restart.copy([1, 2, 3])
Expand source code
class restartfile:
    """
    A container class for storing simulation restart data.

    This class facilitates storing and restoring simulation parameters and results,
    allowing simulations to be resumed or analyzed after computation.

    Methods:
    --------
    copy(what)
        Creates a deep copy of various data types to ensure safety in storage.

    Example:
    --------
    ```python
    restart = restartfile()
    copy_data = restart.copy([1, 2, 3])
    ```
    """
    @classmethod
    def copy(cls, what):
        """Safely copy a parameter that can be a float, str, dict, or a NumPy array"""
        if isinstance(what, (int, float, str, tuple,bool)):  # Immutable types (direct copy)
            return what
        elif isinstance(what, np.ndarray):  # NumPy array (ensure a separate copy)
            return np.copy(what)
        elif isinstance(what, dict):  # Dictionary (deep copy)
            return duplicate(what)
        elif what is None:
            return None
        else:  # Fallback for other complex types
            return duplicate(what)

Subclasses

  • restartfile_senspantakar

Static methods

def copy(what)

Safely copy a parameter that can be a float, str, dict, or a NumPy array

Expand source code
@classmethod
def copy(cls, what):
    """Safely copy a parameter that can be a float, str, dict, or a NumPy array"""
    if isinstance(what, (int, float, str, tuple,bool)):  # Immutable types (direct copy)
        return what
    elif isinstance(what, np.ndarray):  # NumPy array (ensure a separate copy)
        return np.copy(what)
    elif isinstance(what, dict):  # Dictionary (deep copy)
        return duplicate(what)
    elif what is None:
        return None
    else:  # Fallback for other complex types
        return duplicate(what)
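
A minimal sketch (not part of the original documentation), assuming the module context where restartfile and NumPy (np) are available:

    arr = np.array([1.0, 2.0])
    arr_copy = restartfile.copy(arr)   # independent NumPy copy
    arr_copy[0] = 99.0
    print(arr[0])                      # 1.0 (the original array is untouched)
    print(restartfile.copy("label"))   # immutable types are returned as-is
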
class restartfile_senspantakar (multilayer, medium, name, description, t, autotime, timescale, Cxprevious, ntimes, RelTol, AbsTol, deepcopy=True)

Specialized restart file container for the senspatankar() migration solver.

This class stores the simulation inputs and computed results, enabling the resumption of a simulation from a saved state.

Attributes:

inputs : dict
    Stores all initial simulation inputs.
t : float or None
    Simulation time at the stored state.
CF : float or None
    Concentration in food at the stored state.
Cprofile : Cprofile or None
    Concentration profile at the stored state.

Methods:

freezeCF(t, CF)
    Saves the food concentration CF at time t.
freezeCx(x, Cx)
    Saves the concentration profile Cx over x.

Example:

restart = restartfile_senspantakar(multilayer, medium, name, description, ...)
restart.freezeCF(t=1000, CF=0.05)

constructor to be called at initialization

Expand source code
class restartfile_senspantakar(restartfile):
    """
    Specialized restart file container for the `senspatankar` migration solver.

    This class stores the simulation inputs and computed results, enabling
    the resumption of a simulation from a saved state.

    Attributes:
    -----------
    inputs : dict
        Stores all initial simulation inputs.
    t : float or None
        Simulation time at the stored state.
    CF : float or None
        Concentration in food at the stored state.
    Cprofile : Cprofile or None
        Concentration profile at the stored state.

    Methods:
    --------
    freezeCF(t, CF)
        Saves the food concentration `CF` at time `t`.
    freezeCx(x, Cx)
        Saves the concentration profile `Cx` over `x`.

    Example:
    --------
    ```python
    restart = restartfile_senspantakar(multilayer, medium, name, description, ...)
    restart.freezeCF(t=1000, CF=0.05)
    ```
    """
    def __init__(self,multilayer,medium,name,description,
                 t,autotime,timescale,Cxprevious,
                 ntimes,RelTol,AbsTol,deepcopy=True):
        """constructor to be called at the intialization"""
        if deepcopy:
            inputs = {
                "multilayer":multilayer.copy(),
                "medium":medium.copy(),
                "name":restartfile.copy(name),
                "description":restartfile.copy(description),
                "t":restartfile.copy(t), # t is a duration not absolute time (it should not be reused)
                "autotime":restartfile.copy(autotime),
                "timescale":restartfile.copy(timescale),
                "Cxprevious":Cxprevious,
                "ntimes":restartfile.copy(ntimes),
                "RelTol":restartfile.copy(RelTol),
                "AbsTol":restartfile.copy(AbsTol)
                }
        else:
            inputs = {
                "multilayer":multilayer,
                "medium":medium,
                "name":name,
                "description":description,
                "t":t, # t is a duration not absolute time (it should not be reused)
                "autotime":autotime,
                "timescale":timescale,
                "Cxprevious":Cxprevious,
                "ntimes":ntimes,
                "RelTol":RelTol,
                "AbsTol":AbsTol
                }
        # inputs
        self.inputs = inputs
        # outputs
        self.t = None # no result yet
        self.CF = None # no result yet
        self.Cprofile = None # no result yet

    def freezeCF(self,t,CF):
        """Freeze the CF solution CF(t)"""
        self.t = t
        self.CF = CF

    def freezeCx(self,x,Cx):
        """Freeze the Cx solution Cx(x)"""
        self.Cprofile = Cprofile(x,Cx)

    def __repr__(self):
        """representation of the restart object"""
        if self.t is None:
            print("Restart file with no result")
        else:
            print(f"Restart file at t={self.t} with CF={self.CF}")
            print("Details of the profile:")
            repr(self.Cprofile)
        return str(self)

    def __str__(self):
        """Formatted representation of the restart object"""
        res = "no result" if self.t is None else f"solution at t={self.t}"
        return f"<{self.__class__.__name__}: {res}"

Ancestors

  • restartfile

Methods

def freezeCF(self, t, CF)

Freeze the CF solution CF(t)

Expand source code
def freezeCF(self,t,CF):
    """Freeze the CF solution CF(t)"""
    self.t = t
    self.CF = CF
def freezeCx(self, x, Cx)

Freeze the Cx solution Cx(x)

Expand source code
def freezeCx(self,x,Cx):
    """Freeze the Cx solution Cx(x)"""
    self.Cprofile = Cprofile(x,Cx)
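
A minimal sketch (not part of the original documentation) of how the freeze methods store a state; it assumes `restart` is an existing restartfile_senspantakar instance created by the solver and that NumPy is available as np:

    restart.freezeCF(t=1000, CF=0.05)     # store the food concentration at time t
    x = np.linspace(0.0, 200e-6, 50)      # hypothetical positions across the packaging wall
    Cx = np.full_like(x, 0.2)             # hypothetical concentration profile
    restart.freezeCx(x, Cx)               # stored internally as Cprofile(x, Cx)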

Inherited members